2025-05-30 00:00:08.851530 | Job console starting 2025-05-30 00:00:08.868366 | Updating git repos 2025-05-30 00:00:09.268056 | Cloning repos into workspace 2025-05-30 00:00:09.437450 | Restoring repo states 2025-05-30 00:00:09.462113 | Merging changes 2025-05-30 00:00:09.462131 | Checking out repos 2025-05-30 00:00:09.839351 | Preparing playbooks 2025-05-30 00:00:10.436689 | Running Ansible setup 2025-05-30 00:00:15.623534 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main] 2025-05-30 00:00:16.396750 | 2025-05-30 00:00:16.396972 | PLAY [Base pre] 2025-05-30 00:00:16.427091 | 2025-05-30 00:00:16.427281 | TASK [Setup log path fact] 2025-05-30 00:00:16.450079 | orchestrator | ok 2025-05-30 00:00:16.468111 | 2025-05-30 00:00:16.468252 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-05-30 00:00:16.507545 | orchestrator | ok 2025-05-30 00:00:16.519461 | 2025-05-30 00:00:16.519578 | TASK [emit-job-header : Print job information] 2025-05-30 00:00:16.566741 | # Job Information 2025-05-30 00:00:16.566917 | Ansible Version: 2.16.14 2025-05-30 00:00:16.566962 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04 2025-05-30 00:00:16.566998 | Pipeline: periodic-midnight 2025-05-30 00:00:16.567021 | Executor: 521e9411259a 2025-05-30 00:00:16.567043 | Triggered by: https://github.com/osism/testbed 2025-05-30 00:00:16.567066 | Event ID: a78c99f45af94b4694e32ae3ed840fe5 2025-05-30 00:00:16.573555 | 2025-05-30 00:00:16.573651 | LOOP [emit-job-header : Print node information] 2025-05-30 00:00:16.728697 | orchestrator | ok: 2025-05-30 00:00:16.728862 | orchestrator | # Node Information 2025-05-30 00:00:16.728897 | orchestrator | Inventory Hostname: orchestrator 2025-05-30 00:00:16.728923 | orchestrator | Hostname: zuul-static-regiocloud-infra-1 2025-05-30 00:00:16.728946 | orchestrator | Username: zuul-testbed04 2025-05-30 00:00:16.728997 | orchestrator | Distro: Debian 12.11 2025-05-30 00:00:16.729023 | orchestrator | Provider: static-testbed 2025-05-30 00:00:16.729045 | orchestrator | Region: 2025-05-30 00:00:16.729066 | orchestrator | Label: testbed-orchestrator 2025-05-30 00:00:16.729086 | orchestrator | Product Name: OpenStack Nova 2025-05-30 00:00:16.729152 | orchestrator | Interface IP: 81.163.193.140 2025-05-30 00:00:16.752882 | 2025-05-30 00:00:16.753022 | TASK [log-inventory : Ensure Zuul Ansible directory exists] 2025-05-30 00:00:17.241703 | orchestrator -> localhost | changed 2025-05-30 00:00:17.250942 | 2025-05-30 00:00:17.251083 | TASK [log-inventory : Copy ansible inventory to logs dir] 2025-05-30 00:00:18.384561 | orchestrator -> localhost | changed 2025-05-30 00:00:18.402221 | 2025-05-30 00:00:18.402420 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build] 2025-05-30 00:00:18.750904 | orchestrator -> localhost | ok 2025-05-30 00:00:18.762867 | 2025-05-30 00:00:18.763007 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID] 2025-05-30 00:00:18.804534 | orchestrator | ok 2025-05-30 00:00:18.830909 | orchestrator | included: /var/lib/zuul/builds/94fe11b4cd544891847b158adf92cff0/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml 2025-05-30 00:00:18.841875 | 2025-05-30 00:00:18.842000 | TASK [add-build-sshkey : Create Temp SSH key] 2025-05-30 00:00:20.789882 | orchestrator -> localhost | Generating public/private rsa key pair. 
2025-05-30 00:00:20.790104 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/94fe11b4cd544891847b158adf92cff0/work/94fe11b4cd544891847b158adf92cff0_id_rsa 2025-05-30 00:00:20.790146 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/94fe11b4cd544891847b158adf92cff0/work/94fe11b4cd544891847b158adf92cff0_id_rsa.pub 2025-05-30 00:00:20.790182 | orchestrator -> localhost | The key fingerprint is: 2025-05-30 00:00:20.790219 | orchestrator -> localhost | SHA256:ak6cd7NdtyVviyzWobhNS4KEMshAapqlV/IMpis0fbU zuul-build-sshkey 2025-05-30 00:00:20.790253 | orchestrator -> localhost | The key's randomart image is: 2025-05-30 00:00:20.790290 | orchestrator -> localhost | +---[RSA 3072]----+ 2025-05-30 00:00:20.790313 | orchestrator -> localhost | | . | 2025-05-30 00:00:20.790335 | orchestrator -> localhost | |o | 2025-05-30 00:00:20.790355 | orchestrator -> localhost | |o.= . . | 2025-05-30 00:00:20.790374 | orchestrator -> localhost | |=*o* o . | 2025-05-30 00:00:20.790394 | orchestrator -> localhost | |==.+oo ES | 2025-05-30 00:00:20.790418 | orchestrator -> localhost | |.o. +..o. . | 2025-05-30 00:00:20.790439 | orchestrator -> localhost | |o *...+oo + o| 2025-05-30 00:00:20.790459 | orchestrator -> localhost | |. + . o=Boo.++| 2025-05-30 00:00:20.790480 | orchestrator -> localhost | | . .+ooo +o| 2025-05-30 00:00:20.790501 | orchestrator -> localhost | +----[SHA256]-----+ 2025-05-30 00:00:20.790556 | orchestrator -> localhost | ok: Runtime: 0:00:01.398948 2025-05-30 00:00:20.798292 | 2025-05-30 00:00:20.798407 | TASK [add-build-sshkey : Remote setup ssh keys (linux)] 2025-05-30 00:00:20.853803 | orchestrator | ok 2025-05-30 00:00:20.893129 | orchestrator | included: /var/lib/zuul/builds/94fe11b4cd544891847b158adf92cff0/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml 2025-05-30 00:00:20.903085 | 2025-05-30 00:00:20.903200 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey] 2025-05-30 00:00:20.926826 | orchestrator | skipping: Conditional result was False 2025-05-30 00:00:20.935981 | 2025-05-30 00:00:20.936100 | TASK [add-build-sshkey : Enable access via build key on all nodes] 2025-05-30 00:00:21.514376 | orchestrator | changed 2025-05-30 00:00:21.521082 | 2025-05-30 00:00:21.521211 | TASK [add-build-sshkey : Make sure user has a .ssh] 2025-05-30 00:00:21.821457 | orchestrator | ok 2025-05-30 00:00:21.829840 | 2025-05-30 00:00:21.830027 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes] 2025-05-30 00:00:22.304788 | orchestrator | ok 2025-05-30 00:00:22.314606 | 2025-05-30 00:00:22.314746 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes] 2025-05-30 00:00:22.764249 | orchestrator | ok 2025-05-30 00:00:22.772935 | 2025-05-30 00:00:22.773293 | TASK [add-build-sshkey : Remote setup ssh keys (windows)] 2025-05-30 00:00:22.798226 | orchestrator | skipping: Conditional result was False 2025-05-30 00:00:22.809967 | 2025-05-30 00:00:22.810156 | TASK [remove-zuul-sshkey : Remove master key from local agent] 2025-05-30 00:00:23.302279 | orchestrator -> localhost | changed 2025-05-30 00:00:23.315985 | 2025-05-30 00:00:23.316105 | TASK [add-build-sshkey : Add back temp key] 2025-05-30 00:00:23.963070 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/94fe11b4cd544891847b158adf92cff0/work/94fe11b4cd544891847b158adf92cff0_id_rsa (zuul-build-sshkey) 2025-05-30 00:00:23.963277 | orchestrator -> localhost | 
ok: Runtime: 0:00:00.028843 2025-05-30 00:00:23.974974 | 2025-05-30 00:00:23.975079 | TASK [add-build-sshkey : Verify we can still SSH to all nodes] 2025-05-30 00:00:25.055516 | orchestrator | ok 2025-05-30 00:00:25.082916 | 2025-05-30 00:00:25.084589 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)] 2025-05-30 00:00:25.155323 | orchestrator | skipping: Conditional result was False 2025-05-30 00:00:25.317901 | 2025-05-30 00:00:25.318056 | TASK [start-zuul-console : Start zuul_console daemon.] 2025-05-30 00:00:26.418203 | orchestrator | ok 2025-05-30 00:00:26.483110 | 2025-05-30 00:00:26.483278 | TASK [validate-host : Define zuul_info_dir fact] 2025-05-30 00:00:26.614256 | orchestrator | ok 2025-05-30 00:00:26.660605 | 2025-05-30 00:00:26.660759 | TASK [validate-host : Ensure Zuul Ansible directory exists] 2025-05-30 00:00:27.797542 | orchestrator -> localhost | ok 2025-05-30 00:00:27.804906 | 2025-05-30 00:00:27.805034 | TASK [validate-host : Collect information about the host] 2025-05-30 00:00:29.171172 | orchestrator | ok 2025-05-30 00:00:29.200565 | 2025-05-30 00:00:29.200688 | TASK [validate-host : Sanitize hostname] 2025-05-30 00:00:29.285206 | orchestrator | ok 2025-05-30 00:00:29.297591 | 2025-05-30 00:00:29.297698 | TASK [validate-host : Write out all ansible variables/facts known for each host] 2025-05-30 00:00:30.786374 | orchestrator -> localhost | changed 2025-05-30 00:00:30.792335 | 2025-05-30 00:00:30.792508 | TASK [validate-host : Collect information about zuul worker] 2025-05-30 00:00:31.322665 | orchestrator | ok 2025-05-30 00:00:31.327998 | 2025-05-30 00:00:31.328106 | TASK [validate-host : Write out all zuul information for each host] 2025-05-30 00:00:31.948175 | orchestrator -> localhost | changed 2025-05-30 00:00:31.958335 | 2025-05-30 00:00:31.958434 | TASK [prepare-workspace-log : Start zuul_console daemon.] 2025-05-30 00:00:32.283505 | orchestrator | ok 2025-05-30 00:00:32.289201 | 2025-05-30 00:00:32.289298 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.] 2025-05-30 00:00:48.677473 | orchestrator | changed: 2025-05-30 00:00:48.677696 | orchestrator | .d..t...... src/ 2025-05-30 00:00:48.677732 | orchestrator | .d..t...... src/github.com/ 2025-05-30 00:00:48.677757 | orchestrator | .d..t...... src/github.com/osism/ 2025-05-30 00:00:48.677780 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/ 2025-05-30 00:00:48.677800 | orchestrator | RedHat.yml 2025-05-30 00:00:48.707683 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml 2025-05-30 00:00:48.707700 | orchestrator | RedHat.yml 2025-05-30 00:00:48.707753 | orchestrator | = 1.53.0"... 2025-05-30 00:01:03.856320 | orchestrator | 00:01:03.855 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"... 2025-05-30 00:01:03.939257 | orchestrator | 00:01:03.939 STDOUT terraform: - Finding latest version of hashicorp/null... 2025-05-30 00:01:13.088613 | orchestrator | 00:01:13.088 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.1.0... 2025-05-30 00:01:14.348830 | orchestrator | 00:01:14.348 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.1.0 (signed, key ID 4F80527A391BEFD2) 2025-05-30 00:01:15.853103 | orchestrator | 00:01:15.852 STDOUT terraform: - Installing hashicorp/local v2.5.3... 
2025-05-30 00:01:16.784217 | orchestrator | 00:01:16.783 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80) 2025-05-30 00:01:18.269010 | orchestrator | 00:01:18.268 STDOUT terraform: - Installing hashicorp/null v3.2.4... 2025-05-30 00:01:19.819853 | orchestrator | 00:01:19.819 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80) 2025-05-30 00:01:19.819933 | orchestrator | 00:01:19.819 STDOUT terraform: Providers are signed by their developers. 2025-05-30 00:01:19.819947 | orchestrator | 00:01:19.819 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here: 2025-05-30 00:01:19.819954 | orchestrator | 00:01:19.819 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/ 2025-05-30 00:01:19.820596 | orchestrator | 00:01:19.819 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider 2025-05-30 00:01:19.820735 | orchestrator | 00:01:19.820 STDOUT terraform: selections it made above. Include this file in your version control repository 2025-05-30 00:01:19.820760 | orchestrator | 00:01:19.820 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when 2025-05-30 00:01:19.820775 | orchestrator | 00:01:19.820 STDOUT terraform: you run "tofu init" in the future. 2025-05-30 00:01:19.820790 | orchestrator | 00:01:19.820 STDOUT terraform: OpenTofu has been successfully initialized! 2025-05-30 00:01:19.820817 | orchestrator | 00:01:19.820 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see 2025-05-30 00:01:19.822164 | orchestrator | 00:01:19.821 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands 2025-05-30 00:01:19.822233 | orchestrator | 00:01:19.822 STDOUT terraform: should now work. 2025-05-30 00:01:19.822354 | orchestrator | 00:01:19.822 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu, 2025-05-30 00:01:19.822497 | orchestrator | 00:01:19.822 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other 2025-05-30 00:01:19.822674 | orchestrator | 00:01:19.822 STDOUT terraform: commands will detect it and remind you to do so if necessary. 2025-05-30 00:01:19.999521 | orchestrator | 00:01:19.999 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead. 2025-05-30 00:01:20.202263 | orchestrator | 00:01:20.201 STDOUT terraform: Created and switched to workspace "ci"! 2025-05-30 00:01:20.202431 | orchestrator | 00:01:20.202 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state, 2025-05-30 00:01:20.202671 | orchestrator | 00:01:20.202 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state 2025-05-30 00:01:20.202780 | orchestrator | 00:01:20.202 STDOUT terraform: for this configuration. 2025-05-30 00:01:20.463462 | orchestrator | 00:01:20.463 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead. 
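The init output above only shows the result of provider resolution, not the configuration that drives it. As a rough illustration, a required_providers block along the following lines would produce this output; the hashicorp/local constraint (>= 2.2.0) and the unpinned hashicorp/null provider are taken from the log, while the openstack constraint is only partially visible (the '= 1.53.0' fragment earlier in the log), so treat that value as an assumption.

terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"     # assumed; only the tail of this constraint survives in the excerpt
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"      # matches: Finding hashicorp/local versions matching ">= 2.2.0"
    }
    null = {
      source = "hashicorp/null" # unpinned: init reports "Finding latest version of hashicorp/null"
    }
  }
}

The "Created and switched to workspace "ci"!" message that follows is what `tofu workspace new ci` prints, so the plan below starts from an empty, isolated workspace state.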
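The plan output that follows lists only computed attributes, not the HCL that produced them. The sketch below reconstructs the general shape of those resources from what the plan prints (an image looked up with most_recent = true, boot-from-volume servers, flavors OSISM-4V-16 and OSISM-8V-32, key pair "testbed", 80 GB "ssd" volumes in availability zone "nova"). The variable names, the count wiring, and the network reference are assumptions for illustration and are not visible in this excerpt; provider authentication (clouds.yaml or OS_* variables) is likewise assumed.

# Hypothetical reconstruction, not the testbed's actual Terraform code.
variable "image"      { type = string }   # assumed input
variable "network_id" { type = string }   # assumed input
variable "private_key" {                  # assumed input
  type      = string
  sensitive = true
}

data "openstack_images_image_v2" "image" {
  name        = var.image                 # the plan only shows "(known after apply)"
  most_recent = true
}

resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
  count             = 1                   # the plan shows index [0], so the real code presumably uses count
  name              = "testbed-volume-manager-base"
  image_id          = data.openstack_images_image_v2.image.id
  size              = 80
  availability_zone = "nova"
  volume_type       = "ssd"
}

resource "openstack_compute_instance_v2" "manager_server" {
  name              = "testbed-manager"
  flavor_name       = "OSISM-4V-16"
  key_pair          = "testbed"
  availability_zone = "nova"
  config_drive      = true

  block_device {
    uuid                  = openstack_blockstorage_volume_v3.manager_base_volume[0].id
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = false
  }

  network {
    uuid = var.network_id                 # assumed; the plan leaves the network UUID "(known after apply)"
  }
}

resource "local_sensitive_file" "id_rsa" {
  content         = var.private_key       # assumed source; the plan only marks this "(sensitive value)"
  filename        = ".id_rsa.ci"
  file_permission = "0600"
}

The node resources in the plan follow the same pattern scaled out with count: testbed-node-N servers (flavor OSISM-8V-32) booting from six 80 GB testbed-volume-N-node-base volumes, plus nine additional 20 GB volumes named testbed-volume-N-node-M.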
2025-05-30 00:01:20.566309 | orchestrator | 00:01:20.566 STDOUT terraform: ci.auto.tfvars 2025-05-30 00:01:20.571971 | orchestrator | 00:01:20.571 STDOUT terraform: default_custom.tf 2025-05-30 00:01:20.776922 | orchestrator | 00:01:20.776 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead. 2025-05-30 00:01:21.772666 | orchestrator | 00:01:21.772 STDOUT terraform: data.openstack_networking_network_v2.public: Reading... 2025-05-30 00:01:22.305232 | orchestrator | 00:01:22.304 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2025-05-30 00:01:22.491260 | orchestrator | 00:01:22.490 STDOUT terraform: OpenTofu used the selected providers to generate the following execution 2025-05-30 00:01:22.491351 | orchestrator | 00:01:22.491 STDOUT terraform: plan. Resource actions are indicated with the following symbols: 2025-05-30 00:01:22.491396 | orchestrator | 00:01:22.491 STDOUT terraform:  + create 2025-05-30 00:01:22.491442 | orchestrator | 00:01:22.491 STDOUT terraform:  <= read (data resources) 2025-05-30 00:01:22.491525 | orchestrator | 00:01:22.491 STDOUT terraform: OpenTofu will perform the following actions: 2025-05-30 00:01:22.491668 | orchestrator | 00:01:22.491 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-05-30 00:01:22.491778 | orchestrator | 00:01:22.491 STDOUT terraform:  # (config refers to values not yet known) 2025-05-30 00:01:22.491862 | orchestrator | 00:01:22.491 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-05-30 00:01:22.491943 | orchestrator | 00:01:22.491 STDOUT terraform:  + checksum = (known after apply) 2025-05-30 00:01:22.492018 | orchestrator | 00:01:22.491 STDOUT terraform:  + created_at = (known after apply) 2025-05-30 00:01:22.492098 | orchestrator | 00:01:22.492 STDOUT terraform:  + file = (known after apply) 2025-05-30 00:01:22.492179 | orchestrator | 00:01:22.492 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.492275 | orchestrator | 00:01:22.492 STDOUT terraform:  + metadata = (known after apply) 2025-05-30 00:01:22.492345 | orchestrator | 00:01:22.492 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-05-30 00:01:22.492422 | orchestrator | 00:01:22.492 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-05-30 00:01:22.492478 | orchestrator | 00:01:22.492 STDOUT terraform:  + most_recent = true 2025-05-30 00:01:22.492563 | orchestrator | 00:01:22.492 STDOUT terraform:  + name = (known after apply) 2025-05-30 00:01:22.492640 | orchestrator | 00:01:22.492 STDOUT terraform:  + protected = (known after apply) 2025-05-30 00:01:22.492751 | orchestrator | 00:01:22.492 STDOUT terraform:  + region = (known after apply) 2025-05-30 00:01:22.492826 | orchestrator | 00:01:22.492 STDOUT terraform:  + schema = (known after apply) 2025-05-30 00:01:22.492900 | orchestrator | 00:01:22.492 STDOUT terraform:  + size_bytes = (known after apply) 2025-05-30 00:01:22.492981 | orchestrator | 00:01:22.492 STDOUT terraform:  + tags = (known after apply) 2025-05-30 00:01:22.493087 | orchestrator | 00:01:22.492 STDOUT terraform:  + updated_at = (known after apply) 2025-05-30 00:01:22.493127 | orchestrator | 00:01:22.493 STDOUT terraform:  } 2025-05-30 00:01:22.493256 | orchestrator | 00:01:22.493 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 
2025-05-30 00:01:22.493333 | orchestrator | 00:01:22.493 STDOUT terraform:  # (config refers to values not yet known) 2025-05-30 00:01:22.493431 | orchestrator | 00:01:22.493 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-05-30 00:01:22.493508 | orchestrator | 00:01:22.493 STDOUT terraform:  + checksum = (known after apply) 2025-05-30 00:01:22.493587 | orchestrator | 00:01:22.493 STDOUT terraform:  + created_at = (known after apply) 2025-05-30 00:01:22.493665 | orchestrator | 00:01:22.493 STDOUT terraform:  + file = (known after apply) 2025-05-30 00:01:22.493792 | orchestrator | 00:01:22.493 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.493913 | orchestrator | 00:01:22.493 STDOUT terraform:  + metadata = (known after apply) 2025-05-30 00:01:22.493996 | orchestrator | 00:01:22.493 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-05-30 00:01:22.494091 | orchestrator | 00:01:22.493 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-05-30 00:01:22.494136 | orchestrator | 00:01:22.494 STDOUT terraform:  + most_recent = true 2025-05-30 00:01:22.494210 | orchestrator | 00:01:22.494 STDOUT terraform:  + name = (known after apply) 2025-05-30 00:01:22.494277 | orchestrator | 00:01:22.494 STDOUT terraform:  + protected = (known after apply) 2025-05-30 00:01:22.494349 | orchestrator | 00:01:22.494 STDOUT terraform:  + region = (known after apply) 2025-05-30 00:01:22.494432 | orchestrator | 00:01:22.494 STDOUT terraform:  + schema = (known after apply) 2025-05-30 00:01:22.494504 | orchestrator | 00:01:22.494 STDOUT terraform:  + size_bytes = (known after apply) 2025-05-30 00:01:22.494585 | orchestrator | 00:01:22.494 STDOUT terraform:  + tags = (known after apply) 2025-05-30 00:01:22.494634 | orchestrator | 00:01:22.494 STDOUT terraform:  + updated_at = (known after apply) 2025-05-30 00:01:22.494667 | orchestrator | 00:01:22.494 STDOUT terraform:  } 2025-05-30 00:01:22.494787 | orchestrator | 00:01:22.494 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-05-30 00:01:22.494855 | orchestrator | 00:01:22.494 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-05-30 00:01:22.494943 | orchestrator | 00:01:22.494 STDOUT terraform:  + content = (known after apply) 2025-05-30 00:01:22.495028 | orchestrator | 00:01:22.494 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-05-30 00:01:22.495112 | orchestrator | 00:01:22.495 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-05-30 00:01:22.495202 | orchestrator | 00:01:22.495 STDOUT terraform:  + content_md5 = (known after apply) 2025-05-30 00:01:22.495280 | orchestrator | 00:01:22.495 STDOUT terraform:  + content_sha1 = (known after apply) 2025-05-30 00:01:22.495368 | orchestrator | 00:01:22.495 STDOUT terraform:  + content_sha256 = (known after apply) 2025-05-30 00:01:22.495451 | orchestrator | 00:01:22.495 STDOUT terraform:  + content_sha512 = (known after apply) 2025-05-30 00:01:22.495508 | orchestrator | 00:01:22.495 STDOUT terraform:  + directory_permission = "0777" 2025-05-30 00:01:22.495567 | orchestrator | 00:01:22.495 STDOUT terraform:  + file_permission = "0644" 2025-05-30 00:01:22.495651 | orchestrator | 00:01:22.495 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-05-30 00:01:22.495752 | orchestrator | 00:01:22.495 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.495783 | orchestrator | 00:01:22.495 STDOUT terraform:  } 2025-05-30 00:01:22.495849 | orchestrator | 00:01:22.495 STDOUT 
terraform:  # local_file.id_rsa_pub will be created 2025-05-30 00:01:22.495909 | orchestrator | 00:01:22.495 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-05-30 00:01:22.495994 | orchestrator | 00:01:22.495 STDOUT terraform:  + content = (known after apply) 2025-05-30 00:01:22.496081 | orchestrator | 00:01:22.495 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-05-30 00:01:22.496162 | orchestrator | 00:01:22.496 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-05-30 00:01:22.496246 | orchestrator | 00:01:22.496 STDOUT terraform:  + content_md5 = (known after apply) 2025-05-30 00:01:22.496340 | orchestrator | 00:01:22.496 STDOUT terraform:  + content_sha1 = (known after apply) 2025-05-30 00:01:22.496431 | orchestrator | 00:01:22.496 STDOUT terraform:  + content_sha256 = (known after apply) 2025-05-30 00:01:22.496519 | orchestrator | 00:01:22.496 STDOUT terraform:  + content_sha512 = (known after apply) 2025-05-30 00:01:22.496566 | orchestrator | 00:01:22.496 STDOUT terraform:  + directory_permission = "0777" 2025-05-30 00:01:22.496621 | orchestrator | 00:01:22.496 STDOUT terraform:  + file_permission = "0644" 2025-05-30 00:01:22.496732 | orchestrator | 00:01:22.496 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-05-30 00:01:22.496826 | orchestrator | 00:01:22.496 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.496853 | orchestrator | 00:01:22.496 STDOUT terraform:  } 2025-05-30 00:01:22.496909 | orchestrator | 00:01:22.496 STDOUT terraform:  # local_file.inventory will be created 2025-05-30 00:01:22.496967 | orchestrator | 00:01:22.496 STDOUT terraform:  + resource "local_file" "inventory" { 2025-05-30 00:01:22.497043 | orchestrator | 00:01:22.496 STDOUT terraform:  + content = (known after apply) 2025-05-30 00:01:22.497111 | orchestrator | 00:01:22.497 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-05-30 00:01:22.497179 | orchestrator | 00:01:22.497 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-05-30 00:01:22.497250 | orchestrator | 00:01:22.497 STDOUT terraform:  + content_md5 = (known after apply) 2025-05-30 00:01:22.497323 | orchestrator | 00:01:22.497 STDOUT terraform:  + content_sha1 = (known after apply) 2025-05-30 00:01:22.497389 | orchestrator | 00:01:22.497 STDOUT terraform:  + content_sha256 = (known after apply) 2025-05-30 00:01:22.497457 | orchestrator | 00:01:22.497 STDOUT terraform:  + content_sha512 = (known after apply) 2025-05-30 00:01:22.497505 | orchestrator | 00:01:22.497 STDOUT terraform:  + directory_permission = "0777" 2025-05-30 00:01:22.497551 | orchestrator | 00:01:22.497 STDOUT terraform:  + file_permission = "0644" 2025-05-30 00:01:22.497610 | orchestrator | 00:01:22.497 STDOUT terraform:  + filename = "inventory.ci" 2025-05-30 00:01:22.497680 | orchestrator | 00:01:22.497 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.497719 | orchestrator | 00:01:22.497 STDOUT terraform:  } 2025-05-30 00:01:22.497774 | orchestrator | 00:01:22.497 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-05-30 00:01:22.497832 | orchestrator | 00:01:22.497 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-05-30 00:01:22.497897 | orchestrator | 00:01:22.497 STDOUT terraform:  + content = (sensitive value) 2025-05-30 00:01:22.497963 | orchestrator | 00:01:22.497 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-05-30 00:01:22.498049 | orchestrator | 00:01:22.497 STDOUT terraform:  + 
content_base64sha512 = (known after apply) 2025-05-30 00:01:22.498118 | orchestrator | 00:01:22.498 STDOUT terraform:  + content_md5 = (known after apply) 2025-05-30 00:01:22.498186 | orchestrator | 00:01:22.498 STDOUT terraform:  + content_sha1 = (known after apply) 2025-05-30 00:01:22.498255 | orchestrator | 00:01:22.498 STDOUT terraform:  + content_sha256 = (known after apply) 2025-05-30 00:01:22.498324 | orchestrator | 00:01:22.498 STDOUT terraform:  + content_sha512 = (known after apply) 2025-05-30 00:01:22.498396 | orchestrator | 00:01:22.498 STDOUT terraform:  + directory_permission = "0700" 2025-05-30 00:01:22.498418 | orchestrator | 00:01:22.498 STDOUT terraform:  + file_permission = "0600" 2025-05-30 00:01:22.498476 | orchestrator | 00:01:22.498 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-05-30 00:01:22.498548 | orchestrator | 00:01:22.498 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.498572 | orchestrator | 00:01:22.498 STDOUT terraform:  } 2025-05-30 00:01:22.498629 | orchestrator | 00:01:22.498 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-05-30 00:01:22.498686 | orchestrator | 00:01:22.498 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-05-30 00:01:22.498839 | orchestrator | 00:01:22.498 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.498914 | orchestrator | 00:01:22.498 STDOUT terraform:  } 2025-05-30 00:01:22.498939 | orchestrator | 00:01:22.498 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-05-30 00:01:22.498953 | orchestrator | 00:01:22.498 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-05-30 00:01:22.499074 | orchestrator | 00:01:22.498 STDOUT terraform:  + attachment = (known after apply) 2025-05-30 00:01:22.499097 | orchestrator | 00:01:22.499 STDOUT terraform:  + availability_zone = "nova" 2025-05-30 00:01:22.499124 | orchestrator | 00:01:22.499 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.499192 | orchestrator | 00:01:22.499 STDOUT terraform:  + image_id = (known after apply) 2025-05-30 00:01:22.499261 | orchestrator | 00:01:22.499 STDOUT terraform:  + metadata = (known after apply) 2025-05-30 00:01:22.499349 | orchestrator | 00:01:22.499 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-05-30 00:01:22.499421 | orchestrator | 00:01:22.499 STDOUT terraform:  + region = (known after apply) 2025-05-30 00:01:22.499457 | orchestrator | 00:01:22.499 STDOUT terraform:  + size = 80 2025-05-30 00:01:22.499503 | orchestrator | 00:01:22.499 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-30 00:01:22.499549 | orchestrator | 00:01:22.499 STDOUT terraform:  + volume_type = "ssd" 2025-05-30 00:01:22.499575 | orchestrator | 00:01:22.499 STDOUT terraform:  } 2025-05-30 00:01:22.499669 | orchestrator | 00:01:22.499 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-05-30 00:01:22.499796 | orchestrator | 00:01:22.499 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-30 00:01:22.499860 | orchestrator | 00:01:22.499 STDOUT terraform:  + attachment = (known after apply) 2025-05-30 00:01:22.499906 | orchestrator | 00:01:22.499 STDOUT terraform:  + availability_zone = "nova" 2025-05-30 00:01:22.499978 | orchestrator | 00:01:22.499 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.500054 | orchestrator | 00:01:22.499 STDOUT terraform:  + image_id = (known 
after apply) 2025-05-30 00:01:22.500124 | orchestrator | 00:01:22.500 STDOUT terraform:  + metadata = (known after apply) 2025-05-30 00:01:22.500212 | orchestrator | 00:01:22.500 STDOUT terraform:  + name = "testbed-volume-0-node-base" 2025-05-30 00:01:22.500277 | orchestrator | 00:01:22.500 STDOUT terraform:  + region = (known after apply) 2025-05-30 00:01:22.500311 | orchestrator | 00:01:22.500 STDOUT terraform:  + size = 80 2025-05-30 00:01:22.500352 | orchestrator | 00:01:22.500 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-30 00:01:22.500391 | orchestrator | 00:01:22.500 STDOUT terraform:  + volume_type = "ssd" 2025-05-30 00:01:22.500412 | orchestrator | 00:01:22.500 STDOUT terraform:  } 2025-05-30 00:01:22.500494 | orchestrator | 00:01:22.500 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-05-30 00:01:22.500569 | orchestrator | 00:01:22.500 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-30 00:01:22.500630 | orchestrator | 00:01:22.500 STDOUT terraform:  + attachment = (known after apply) 2025-05-30 00:01:22.500668 | orchestrator | 00:01:22.500 STDOUT terraform:  + availability_zone = "nova" 2025-05-30 00:01:22.500751 | orchestrator | 00:01:22.500 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.500812 | orchestrator | 00:01:22.500 STDOUT terraform:  + image_id = (known after apply) 2025-05-30 00:01:22.500872 | orchestrator | 00:01:22.500 STDOUT terraform:  + metadata = (known after apply) 2025-05-30 00:01:22.500949 | orchestrator | 00:01:22.500 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-05-30 00:01:22.501009 | orchestrator | 00:01:22.500 STDOUT terraform:  + region = (known after apply) 2025-05-30 00:01:22.501042 | orchestrator | 00:01:22.501 STDOUT terraform:  + size = 80 2025-05-30 00:01:22.501083 | orchestrator | 00:01:22.501 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-30 00:01:22.501122 | orchestrator | 00:01:22.501 STDOUT terraform:  + volume_type = "ssd" 2025-05-30 00:01:22.501143 | orchestrator | 00:01:22.501 STDOUT terraform:  } 2025-05-30 00:01:22.501220 | orchestrator | 00:01:22.501 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-05-30 00:01:22.501297 | orchestrator | 00:01:22.501 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-30 00:01:22.501357 | orchestrator | 00:01:22.501 STDOUT terraform:  + attachment = (known after apply) 2025-05-30 00:01:22.501396 | orchestrator | 00:01:22.501 STDOUT terraform:  + availability_zone = "nova" 2025-05-30 00:01:22.501456 | orchestrator | 00:01:22.501 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.501517 | orchestrator | 00:01:22.501 STDOUT terraform:  + image_id = (known after apply) 2025-05-30 00:01:22.502197 | orchestrator | 00:01:22.501 STDOUT terraform:  + metadata = (known after apply) 2025-05-30 00:01:22.502279 | orchestrator | 00:01:22.502 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-05-30 00:01:22.502336 | orchestrator | 00:01:22.502 STDOUT terraform:  + region = (known after apply) 2025-05-30 00:01:22.502364 | orchestrator | 00:01:22.502 STDOUT terraform:  + size = 80 2025-05-30 00:01:22.502403 | orchestrator | 00:01:22.502 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-30 00:01:22.502441 | orchestrator | 00:01:22.502 STDOUT terraform:  + volume_type = "ssd" 2025-05-30 00:01:22.502461 | orchestrator | 00:01:22.502 STDOUT terraform: 
 } 2025-05-30 00:01:22.502540 | orchestrator | 00:01:22.502 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-05-30 00:01:22.502605 | orchestrator | 00:01:22.502 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-30 00:01:22.502661 | orchestrator | 00:01:22.502 STDOUT terraform:  + attachment = (known after apply) 2025-05-30 00:01:22.502704 | orchestrator | 00:01:22.502 STDOUT terraform:  + availability_zone = "nova" 2025-05-30 00:01:22.502761 | orchestrator | 00:01:22.502 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.502816 | orchestrator | 00:01:22.502 STDOUT terraform:  + image_id = (known after apply) 2025-05-30 00:01:22.502874 | orchestrator | 00:01:22.502 STDOUT terraform:  + metadata = (known after apply) 2025-05-30 00:01:22.502943 | orchestrator | 00:01:22.502 STDOUT terraform:  + name = "testbed-volume-3-node-base" 2025-05-30 00:01:22.503000 | orchestrator | 00:01:22.502 STDOUT terraform:  + region = (known after apply) 2025-05-30 00:01:22.503032 | orchestrator | 00:01:22.502 STDOUT terraform:  + size = 80 2025-05-30 00:01:22.503071 | orchestrator | 00:01:22.503 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-30 00:01:22.503109 | orchestrator | 00:01:22.503 STDOUT terraform:  + volume_type = "ssd" 2025-05-30 00:01:22.503129 | orchestrator | 00:01:22.503 STDOUT terraform:  } 2025-05-30 00:01:22.503203 | orchestrator | 00:01:22.503 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-05-30 00:01:22.503273 | orchestrator | 00:01:22.503 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-30 00:01:22.503329 | orchestrator | 00:01:22.503 STDOUT terraform:  + attachment = (known after apply) 2025-05-30 00:01:22.503365 | orchestrator | 00:01:22.503 STDOUT terraform:  + availability_zone = "nova" 2025-05-30 00:01:22.503422 | orchestrator | 00:01:22.503 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.503477 | orchestrator | 00:01:22.503 STDOUT terraform:  + image_id = (known after apply) 2025-05-30 00:01:22.503531 | orchestrator | 00:01:22.503 STDOUT terraform:  + metadata = (known after apply) 2025-05-30 00:01:22.503602 | orchestrator | 00:01:22.503 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-05-30 00:01:22.503658 | orchestrator | 00:01:22.503 STDOUT terraform:  + region = (known after apply) 2025-05-30 00:01:22.503690 | orchestrator | 00:01:22.503 STDOUT terraform:  + size = 80 2025-05-30 00:01:22.503740 | orchestrator | 00:01:22.503 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-30 00:01:22.503774 | orchestrator | 00:01:22.503 STDOUT terraform:  + volume_type = "ssd" 2025-05-30 00:01:22.503794 | orchestrator | 00:01:22.503 STDOUT terraform:  } 2025-05-30 00:01:22.503866 | orchestrator | 00:01:22.503 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-05-30 00:01:22.503938 | orchestrator | 00:01:22.503 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-30 00:01:22.503992 | orchestrator | 00:01:22.503 STDOUT terraform:  + attachment = (known after apply) 2025-05-30 00:01:22.504031 | orchestrator | 00:01:22.503 STDOUT terraform:  + availability_zone = "nova" 2025-05-30 00:01:22.504087 | orchestrator | 00:01:22.504 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.504142 | orchestrator | 00:01:22.504 STDOUT terraform:  + image_id = (known 
after apply) 2025-05-30 00:01:22.504197 | orchestrator | 00:01:22.504 STDOUT terraform:  + metadata = (known after apply) 2025-05-30 00:01:22.504268 | orchestrator | 00:01:22.504 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-05-30 00:01:22.504327 | orchestrator | 00:01:22.504 STDOUT terraform:  + region = (known after apply) 2025-05-30 00:01:22.504356 | orchestrator | 00:01:22.504 STDOUT terraform:  + size = 80 2025-05-30 00:01:22.504393 | orchestrator | 00:01:22.504 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-30 00:01:22.504431 | orchestrator | 00:01:22.504 STDOUT terraform:  + volume_type = "ssd" 2025-05-30 00:01:22.504450 | orchestrator | 00:01:22.504 STDOUT terraform:  } 2025-05-30 00:01:22.504520 | orchestrator | 00:01:22.504 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-05-30 00:01:22.504587 | orchestrator | 00:01:22.504 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-30 00:01:22.504642 | orchestrator | 00:01:22.504 STDOUT terraform:  + attachment = (known after apply) 2025-05-30 00:01:22.504679 | orchestrator | 00:01:22.504 STDOUT terraform:  + availability_zone = "nova" 2025-05-30 00:01:22.504763 | orchestrator | 00:01:22.504 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.504821 | orchestrator | 00:01:22.504 STDOUT terraform:  + metadata = (known after apply) 2025-05-30 00:01:22.504882 | orchestrator | 00:01:22.504 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-05-30 00:01:22.504937 | orchestrator | 00:01:22.504 STDOUT terraform:  + region = (known after apply) 2025-05-30 00:01:22.504969 | orchestrator | 00:01:22.504 STDOUT terraform:  + size = 20 2025-05-30 00:01:22.505002 | orchestrator | 00:01:22.504 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-30 00:01:22.505036 | orchestrator | 00:01:22.505 STDOUT terraform:  + volume_type = "ssd" 2025-05-30 00:01:22.505054 | orchestrator | 00:01:22.505 STDOUT terraform:  } 2025-05-30 00:01:22.505115 | orchestrator | 00:01:22.505 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-05-30 00:01:22.505176 | orchestrator | 00:01:22.505 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-30 00:01:22.505223 | orchestrator | 00:01:22.505 STDOUT terraform:  + attachment = (known after apply) 2025-05-30 00:01:22.505255 | orchestrator | 00:01:22.505 STDOUT terraform:  + availability_zone = "nova" 2025-05-30 00:01:22.505305 | orchestrator | 00:01:22.505 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.505353 | orchestrator | 00:01:22.505 STDOUT terraform:  + metadata = (known after apply) 2025-05-30 00:01:22.505407 | orchestrator | 00:01:22.505 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-05-30 00:01:22.505455 | orchestrator | 00:01:22.505 STDOUT terraform:  + region = (known after apply) 2025-05-30 00:01:22.505483 | orchestrator | 00:01:22.505 STDOUT terraform:  + size = 20 2025-05-30 00:01:22.505515 | orchestrator | 00:01:22.505 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-30 00:01:22.505549 | orchestrator | 00:01:22.505 STDOUT terraform:  + volume_type = "ssd" 2025-05-30 00:01:22.505566 | orchestrator | 00:01:22.505 STDOUT terraform:  } 2025-05-30 00:01:22.505626 | orchestrator | 00:01:22.505 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-05-30 00:01:22.505723 | orchestrator | 00:01:22.505 STDOUT terraform:  + resource 
"openstack_blockstorage_volume_v3" "node_volume" { 2025-05-30 00:01:22.505770 | orchestrator | 00:01:22.505 STDOUT terraform:  + attachment = (known after apply) 2025-05-30 00:01:22.505804 | orchestrator | 00:01:22.505 STDOUT terraform:  + availability_zone = "nova" 2025-05-30 00:01:22.505853 | orchestrator | 00:01:22.505 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.505902 | orchestrator | 00:01:22.505 STDOUT terraform:  + metadata = (known after apply) 2025-05-30 00:01:22.505955 | orchestrator | 00:01:22.505 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-05-30 00:01:22.506004 | orchestrator | 00:01:22.505 STDOUT terraform:  + region = (known after apply) 2025-05-30 00:01:22.506047 | orchestrator | 00:01:22.505 STDOUT terraform:  + size = 20 2025-05-30 00:01:22.506076 | orchestrator | 00:01:22.506 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-30 00:01:22.506108 | orchestrator | 00:01:22.506 STDOUT terraform:  + volume_type = "ssd" 2025-05-30 00:01:22.506126 | orchestrator | 00:01:22.506 STDOUT terraform:  } 2025-05-30 00:01:22.506186 | orchestrator | 00:01:22.506 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-05-30 00:01:22.506246 | orchestrator | 00:01:22.506 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-30 00:01:22.506302 | orchestrator | 00:01:22.506 STDOUT terraform:  + attachment = (known after apply) 2025-05-30 00:01:22.506334 | orchestrator | 00:01:22.506 STDOUT terraform:  + availability_zone = "nova" 2025-05-30 00:01:22.506384 | orchestrator | 00:01:22.506 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.506433 | orchestrator | 00:01:22.506 STDOUT terraform:  + metadata = (known after apply) 2025-05-30 00:01:22.506486 | orchestrator | 00:01:22.506 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-05-30 00:01:22.506534 | orchestrator | 00:01:22.506 STDOUT terraform:  + region = (known after apply) 2025-05-30 00:01:22.506561 | orchestrator | 00:01:22.506 STDOUT terraform:  + size = 20 2025-05-30 00:01:22.506596 | orchestrator | 00:01:22.506 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-30 00:01:22.506627 | orchestrator | 00:01:22.506 STDOUT terraform:  + volume_type = "ssd" 2025-05-30 00:01:22.506645 | orchestrator | 00:01:22.506 STDOUT terraform:  } 2025-05-30 00:01:22.506715 | orchestrator | 00:01:22.506 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-05-30 00:01:22.506773 | orchestrator | 00:01:22.506 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-30 00:01:22.506822 | orchestrator | 00:01:22.506 STDOUT terraform:  + attachment = (known after apply) 2025-05-30 00:01:22.506855 | orchestrator | 00:01:22.506 STDOUT terraform:  + availability_zone = "nova" 2025-05-30 00:01:22.506904 | orchestrator | 00:01:22.506 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.506953 | orchestrator | 00:01:22.506 STDOUT terraform:  + metadata = (known after apply) 2025-05-30 00:01:22.507006 | orchestrator | 00:01:22.506 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-05-30 00:01:22.507055 | orchestrator | 00:01:22.507 STDOUT terraform:  + region = (known after apply) 2025-05-30 00:01:22.507084 | orchestrator | 00:01:22.507 STDOUT terraform:  + size = 20 2025-05-30 00:01:22.507116 | orchestrator | 00:01:22.507 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-30 00:01:22.507149 | orchestrator | 00:01:22.507 
STDOUT terraform:  + volume_type = "ssd" 2025-05-30 00:01:22.507166 | orchestrator | 00:01:22.507 STDOUT terraform:  } 2025-05-30 00:01:22.507226 | orchestrator | 00:01:22.507 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-05-30 00:01:22.507284 | orchestrator | 00:01:22.507 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-30 00:01:22.507332 | orchestrator | 00:01:22.507 STDOUT terraform:  + attachment = (known after apply) 2025-05-30 00:01:22.507364 | orchestrator | 00:01:22.507 STDOUT terraform:  + availability_zone = "nova" 2025-05-30 00:01:22.507414 | orchestrator | 00:01:22.507 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.507463 | orchestrator | 00:01:22.507 STDOUT terraform:  + metadata = (known after apply) 2025-05-30 00:01:22.507515 | orchestrator | 00:01:22.507 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-05-30 00:01:22.507565 | orchestrator | 00:01:22.507 STDOUT terraform:  + region = (known after apply) 2025-05-30 00:01:22.507594 | orchestrator | 00:01:22.507 STDOUT terraform:  + size = 20 2025-05-30 00:01:22.507626 | orchestrator | 00:01:22.507 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-30 00:01:22.507658 | orchestrator | 00:01:22.507 STDOUT terraform:  + volume_type = "ssd" 2025-05-30 00:01:22.507679 | orchestrator | 00:01:22.507 STDOUT terraform:  } 2025-05-30 00:01:22.507755 | orchestrator | 00:01:22.507 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-05-30 00:01:22.507808 | orchestrator | 00:01:22.507 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-30 00:01:22.507856 | orchestrator | 00:01:22.507 STDOUT terraform:  + attachment = (known after apply) 2025-05-30 00:01:22.507889 | orchestrator | 00:01:22.507 STDOUT terraform:  + availability_zone = "nova" 2025-05-30 00:01:22.507938 | orchestrator | 00:01:22.507 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.507987 | orchestrator | 00:01:22.507 STDOUT terraform:  + metadata = (known after apply) 2025-05-30 00:01:22.508039 | orchestrator | 00:01:22.507 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-05-30 00:01:22.508088 | orchestrator | 00:01:22.508 STDOUT terraform:  + region = (known after apply) 2025-05-30 00:01:22.508116 | orchestrator | 00:01:22.508 STDOUT terraform:  + size = 20 2025-05-30 00:01:22.508151 | orchestrator | 00:01:22.508 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-30 00:01:22.508180 | orchestrator | 00:01:22.508 STDOUT terraform:  + volume_type = "ssd" 2025-05-30 00:01:22.508199 | orchestrator | 00:01:22.508 STDOUT terraform:  } 2025-05-30 00:01:22.508260 | orchestrator | 00:01:22.508 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-05-30 00:01:22.508319 | orchestrator | 00:01:22.508 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-30 00:01:22.508368 | orchestrator | 00:01:22.508 STDOUT terraform:  + attachment = (known after apply) 2025-05-30 00:01:22.508400 | orchestrator | 00:01:22.508 STDOUT terraform:  + availability_zone = "nova" 2025-05-30 00:01:22.508449 | orchestrator | 00:01:22.508 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.508498 | orchestrator | 00:01:22.508 STDOUT terraform:  + metadata = (known after apply) 2025-05-30 00:01:22.508552 | orchestrator | 00:01:22.508 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-05-30 
00:01:22.508601 | orchestrator | 00:01:22.508 STDOUT terraform:  + region = (known after apply) 2025-05-30 00:01:22.508629 | orchestrator | 00:01:22.508 STDOUT terraform:  + size = 20 2025-05-30 00:01:22.508661 | orchestrator | 00:01:22.508 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-30 00:01:22.508702 | orchestrator | 00:01:22.508 STDOUT terraform:  + volume_type = "ssd" 2025-05-30 00:01:22.508733 | orchestrator | 00:01:22.508 STDOUT terraform:  } 2025-05-30 00:01:22.508868 | orchestrator | 00:01:22.508 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-05-30 00:01:22.508922 | orchestrator | 00:01:22.508 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-30 00:01:22.508970 | orchestrator | 00:01:22.508 STDOUT terraform:  + attachment = (known after apply) 2025-05-30 00:01:22.508999 | orchestrator | 00:01:22.508 STDOUT terraform:  + availability_zone = "nova" 2025-05-30 00:01:22.509044 | orchestrator | 00:01:22.508 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.509087 | orchestrator | 00:01:22.509 STDOUT terraform:  + metadata = (known after apply) 2025-05-30 00:01:22.509136 | orchestrator | 00:01:22.509 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-05-30 00:01:22.509180 | orchestrator | 00:01:22.509 STDOUT terraform:  + region = (known after apply) 2025-05-30 00:01:22.509205 | orchestrator | 00:01:22.509 STDOUT terraform:  + size = 20 2025-05-30 00:01:22.509234 | orchestrator | 00:01:22.509 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-30 00:01:22.509262 | orchestrator | 00:01:22.509 STDOUT terraform:  + volume_type = "ssd" 2025-05-30 00:01:22.509277 | orchestrator | 00:01:22.509 STDOUT terraform:  } 2025-05-30 00:01:22.509330 | orchestrator | 00:01:22.509 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-05-30 00:01:22.509383 | orchestrator | 00:01:22.509 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-05-30 00:01:22.509426 | orchestrator | 00:01:22.509 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-30 00:01:22.509467 | orchestrator | 00:01:22.509 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-30 00:01:22.509510 | orchestrator | 00:01:22.509 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-30 00:01:22.509553 | orchestrator | 00:01:22.509 STDOUT terraform:  + all_tags = (known after apply) 2025-05-30 00:01:22.509582 | orchestrator | 00:01:22.509 STDOUT terraform:  + availability_zone = "nova" 2025-05-30 00:01:22.509606 | orchestrator | 00:01:22.509 STDOUT terraform:  + config_drive = true 2025-05-30 00:01:22.509650 | orchestrator | 00:01:22.509 STDOUT terraform:  + created = (known after apply) 2025-05-30 00:01:22.509692 | orchestrator | 00:01:22.509 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-30 00:01:22.509736 | orchestrator | 00:01:22.509 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-05-30 00:01:22.509764 | orchestrator | 00:01:22.509 STDOUT terraform:  + force_delete = false 2025-05-30 00:01:22.509805 | orchestrator | 00:01:22.509 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-05-30 00:01:22.509849 | orchestrator | 00:01:22.509 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.509894 | orchestrator | 00:01:22.509 STDOUT terraform:  + image_id = (known after apply) 2025-05-30 00:01:22.509937 | orchestrator | 00:01:22.509 STDOUT terraform:  + image_name = (known after 
apply) 2025-05-30 00:01:22.509966 | orchestrator | 00:01:22.509 STDOUT terraform:  + key_pair = "testbed" 2025-05-30 00:01:22.510004 | orchestrator | 00:01:22.509 STDOUT terraform:  + name = "testbed-manager" 2025-05-30 00:01:22.510046 | orchestrator | 00:01:22.510 STDOUT terraform:  + power_state = "active" 2025-05-30 00:01:22.510090 | orchestrator | 00:01:22.510 STDOUT terraform:  + region = (known after apply) 2025-05-30 00:01:22.510134 | orchestrator | 00:01:22.510 STDOUT terraform:  + security_groups = (known after apply) 2025-05-30 00:01:22.510161 | orchestrator | 00:01:22.510 STDOUT terraform:  + stop_before_destroy = false 2025-05-30 00:01:22.510204 | orchestrator | 00:01:22.510 STDOUT terraform:  + updated = (known after apply) 2025-05-30 00:01:22.510247 | orchestrator | 00:01:22.510 STDOUT terraform:  + user_data = (known after apply) 2025-05-30 00:01:22.510267 | orchestrator | 00:01:22.510 STDOUT terraform:  + block_device { 2025-05-30 00:01:22.510296 | orchestrator | 00:01:22.510 STDOUT terraform:  + boot_index = 0 2025-05-30 00:01:22.510329 | orchestrator | 00:01:22.510 STDOUT terraform:  + delete_on_termination = false 2025-05-30 00:01:22.513633 | orchestrator | 00:01:22.510 STDOUT terraform:  + destination_type = "volume" 2025-05-30 00:01:22.513662 | orchestrator | 00:01:22.510 STDOUT terraform:  + multiattach = false 2025-05-30 00:01:22.513667 | orchestrator | 00:01:22.510 STDOUT terraform:  + source_type = "volume" 2025-05-30 00:01:22.513671 | orchestrator | 00:01:22.510 STDOUT terraform:  + uuid = (known after apply) 2025-05-30 00:01:22.513675 | orchestrator | 00:01:22.510 STDOUT terraform:  } 2025-05-30 00:01:22.513679 | orchestrator | 00:01:22.510 STDOUT terraform:  + network { 2025-05-30 00:01:22.513683 | orchestrator | 00:01:22.510 STDOUT terraform:  + access_network = false 2025-05-30 00:01:22.513687 | orchestrator | 00:01:22.510 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-30 00:01:22.513690 | orchestrator | 00:01:22.510 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-30 00:01:22.513706 | orchestrator | 00:01:22.510 STDOUT terraform:  + mac = (known after apply) 2025-05-30 00:01:22.513710 | orchestrator | 00:01:22.511 STDOUT terraform:  + name = (known after apply) 2025-05-30 00:01:22.513714 | orchestrator | 00:01:22.511 STDOUT terraform:  + port = (known after apply) 2025-05-30 00:01:22.513718 | orchestrator | 00:01:22.511 STDOUT terraform:  + uuid = (known after apply) 2025-05-30 00:01:22.513728 | orchestrator | 00:01:22.511 STDOUT terraform:  } 2025-05-30 00:01:22.513733 | orchestrator | 00:01:22.511 STDOUT terraform:  } 2025-05-30 00:01:22.513736 | orchestrator | 00:01:22.511 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-05-30 00:01:22.513740 | orchestrator | 00:01:22.511 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-30 00:01:22.513744 | orchestrator | 00:01:22.511 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-30 00:01:22.513755 | orchestrator | 00:01:22.511 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-30 00:01:22.513759 | orchestrator | 00:01:22.511 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-30 00:01:22.513763 | orchestrator | 00:01:22.511 STDOUT terraform:  + all_tags = (known after apply) 2025-05-30 00:01:22.513767 | orchestrator | 00:01:22.511 STDOUT terraform:  + availability_zone = "nova" 2025-05-30 00:01:22.513771 | orchestrator | 00:01:22.511 STDOUT terraform:  + config_drive = true 
2025-05-30 00:01:22.513774 | orchestrator | 00:01:22.512 STDOUT terraform:  + created = (known after apply) 2025-05-30 00:01:22.513778 | orchestrator | 00:01:22.512 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-30 00:01:22.513782 | orchestrator | 00:01:22.512 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-30 00:01:22.513786 | orchestrator | 00:01:22.512 STDOUT terraform:  + force_delete = false 2025-05-30 00:01:22.513790 | orchestrator | 00:01:22.512 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-05-30 00:01:22.513793 | orchestrator | 00:01:22.512 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.513797 | orchestrator | 00:01:22.512 STDOUT terraform:  + image_id = (known after apply) 2025-05-30 00:01:22.513801 | orchestrator | 00:01:22.512 STDOUT terraform:  + image_name = (known after apply) 2025-05-30 00:01:22.513805 | orchestrator | 00:01:22.512 STDOUT terraform:  + key_pair = "testbed" 2025-05-30 00:01:22.513808 | orchestrator | 00:01:22.512 STDOUT terraform:  + name = "testbed-node-0" 2025-05-30 00:01:22.513812 | orchestrator | 00:01:22.512 STDOUT terraform:  + power_state = "active" 2025-05-30 00:01:22.513821 | orchestrator | 00:01:22.512 STDOUT terraform:  + region = (known after apply) 2025-05-30 00:01:22.513825 | orchestrator | 00:01:22.512 STDOUT terraform:  + security_groups = (known after apply) 2025-05-30 00:01:22.513829 | orchestrator | 00:01:22.512 STDOUT terraform:  + stop_before_destroy = false 2025-05-30 00:01:22.513833 | orchestrator | 00:01:22.512 STDOUT terraform:  + updated = (known after apply) 2025-05-30 00:01:22.513842 | orchestrator | 00:01:22.513 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-30 00:01:22.513847 | orchestrator | 00:01:22.513 STDOUT terraform:  + block_device { 2025-05-30 00:01:22.513851 | orchestrator | 00:01:22.513 STDOUT terraform:  + boot_index = 0 2025-05-30 00:01:22.513855 | orchestrator | 00:01:22.513 STDOUT terraform:  + delete_on_termination = false 2025-05-30 00:01:22.513859 | orchestrator | 00:01:22.513 STDOUT terraform:  + destination_type = "volume" 2025-05-30 00:01:22.513865 | orchestrator | 00:01:22.513 STDOUT terraform:  + multiattach = false 2025-05-30 00:01:22.513869 | orchestrator | 00:01:22.513 STDOUT terraform:  + source_type = "volume" 2025-05-30 00:01:22.513873 | orchestrator | 00:01:22.513 STDOUT terraform:  + uuid = (known after apply) 2025-05-30 00:01:22.513876 | orchestrator | 00:01:22.513 STDOUT terraform:  } 2025-05-30 00:01:22.513880 | orchestrator | 00:01:22.513 STDOUT terraform:  + network { 2025-05-30 00:01:22.513884 | orchestrator | 00:01:22.513 STDOUT terraform:  + access_network = false 2025-05-30 00:01:22.513888 | orchestrator | 00:01:22.513 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-30 00:01:22.513892 | orchestrator | 00:01:22.513 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-30 00:01:22.513897 | orchestrator | 00:01:22.513 STDOUT terraform:  + mac = (known after apply) 2025-05-30 00:01:22.513961 | orchestrator | 00:01:22.513 STDOUT terraform:  + name = (known after apply) 2025-05-30 00:01:22.514010 | orchestrator | 00:01:22.513 STDOUT terraform:  + port = (known after apply) 2025-05-30 00:01:22.514099 | orchestrator | 00:01:22.514 STDOUT terraform:  + uuid = (known after apply) 2025-05-30 00:01:22.514130 | orchestrator | 00:01:22.514 STDOUT terraform:  } 2025-05-30 00:01:22.514155 | orchestrator | 00:01:22.514 STDOUT terraform:  } 2025-05-30 00:01:22.514246 | orchestrator | 
  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + hypervisor_hostname = (known after apply)
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-1"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }
  # openstack_compute_instance_v2.node_server[2] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + hypervisor_hostname = (known after apply)
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-2"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }
  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + hypervisor_hostname = (known after apply)
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-3"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }
  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + hypervisor_hostname = (known after apply)
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-4"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }
  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + hypervisor_hostname = (known after apply)
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-5"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }
  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id = (known after apply)
      + name = "testbed"
      + private_key = (sensitive value)
      + public_key = (known after apply)
      + region = (known after apply)
      + user_id = (known after apply)
    }
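The keypair plan shows private_key as a sensitive value and public_key as known only after apply, which is what the provider does when no public_key is supplied: it generates the pair itself. A minimal sketch of such a definition:

  # Sketch only; with no public_key argument the provider creates the key
  # pair and exports the generated private key as a sensitive attribute.
  resource "openstack_compute_keypair_v2" "key" {
    name = "testbed"
  }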
  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }
  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }
  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }
  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }
  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }
  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }
  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }
  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }
  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }
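Nine counted attachments are planned for the six nodes; how the extra volumes map onto instances is not visible in this excerpt. A minimal counted sketch, with the volume resource name and the node/volume mapping as placeholder assumptions:

  # Illustrative only; the modulo mapping and "node_extra_volume" are guesses,
  # not the actual osism/testbed layout.
  resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
    count       = 9
    instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
    volume_id   = openstack_blockstorage_volume_v3.node_extra_volume[count.index].id
  }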
  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip = (known after apply)
      + floating_ip = (known after apply)
      + id = (known after apply)
      + port_id = (known after apply)
      + region = (known after apply)
    }
  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address = (known after apply)
      + all_tags = (known after apply)
      + dns_domain = (known after apply)
      + dns_name = (known after apply)
      + fixed_ip = (known after apply)
      + id = (known after apply)
      + pool = "public"
      + port_id = (known after apply)
      + region = (known after apply)
      + subnet_id = (known after apply)
      + tenant_id = (known after apply)
    }
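The manager is reachable through a floating IP from the "public" pool that is bound to its management port. A minimal sketch of the pair of resources behind this plan, using the port name shown in the plan; everything else is an assumption:

  # Sketch only; not the actual osism/testbed sources.
  resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
    pool = "public"
  }

  resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
    floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
    port_id     = openstack_networking_port_v2.manager_port_management.id
  }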
  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up = (known after apply)
      + all_tags = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain = (known after apply)
      + external = (known after apply)
      + id = (known after apply)
      + mtu = (known after apply)
      + name = "net-testbed-management"
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + shared = (known after apply)
      + tenant_id = (known after apply)
      + transparent_vlan = (known after apply)
      + segments (known after apply)
    }
  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id = (known after apply)
        }
    }
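The manager port is anchored to the management network with a fixed address (192.168.16.5) and two allowed address pairs. A minimal sketch of a matching definition; the subnet resource is not part of this plan excerpt, so its name is an assumption:

  # Sketch only; "subnet_management" is a hypothetical reference.
  resource "openstack_networking_port_v2" "manager_port_management" {
    network_id = openstack_networking_network_v2.net_management.id

    fixed_ip {
      subnet_id  = openstack_networking_subnet_v2.subnet_management.id
      ip_address = "192.168.16.5"
    }

    allowed_address_pairs {
      ip_address = "192.168.112.0/20"
    }
    allowed_address_pairs {
      ip_address = "192.168.16.8/20"
    }
  }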
  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id = (known after apply)
        }
    }
  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id = (known after apply)
        }
    }
  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id = (known after apply)
        }
    }
  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id = (known after apply)
        }
    }
  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id = (known after apply)
        }
    }
  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id = (known after apply)
        }
    }
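The six node ports differ only in their fixed address (192.168.16.10 through 192.168.16.15) and share the same four allowed address pairs, so a counted definition fits. A minimal sketch; the subnet reference and the use of a dynamic block are assumptions for illustration:

  # Sketch only; "subnet_management" is a hypothetical reference.
  resource "openstack_networking_port_v2" "node_port_management" {
    count      = 6
    network_id = openstack_networking_network_v2.net_management.id

    fixed_ip {
      subnet_id  = openstack_networking_subnet_v2.subnet_management.id
      ip_address = "192.168.16.${10 + count.index}"
    }

    dynamic "allowed_address_pairs" {
      for_each = ["192.168.112.0/20", "192.168.16.254/20", "192.168.16.8/20", "192.168.16.9/20"]
      content {
        ip_address = allowed_address_pairs.value
      }
    }
  }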
orchestrator | 00:01:22.534 STDOUT terraform:  + availability_zone_hints = [ 2025-05-30 00:01:22.534331 | orchestrator | 00:01:22.534 STDOUT terraform:  + "nova", 2025-05-30 00:01:22.534335 | orchestrator | 00:01:22.534 STDOUT terraform:  ] 2025-05-30 00:01:22.534341 | orchestrator | 00:01:22.534 STDOUT terraform:  + distributed = (known after apply) 2025-05-30 00:01:22.534346 | orchestrator | 00:01:22.534 STDOUT terraform:  + enable_snat = (known after apply) 2025-05-30 00:01:22.534511 | orchestrator | 00:01:22.534 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-05-30 00:01:22.534517 | orchestrator | 00:01:22.534 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.534521 | orchestrator | 00:01:22.534 STDOUT terraform:  + name = "testbed" 2025-05-30 00:01:22.534525 | orchestrator | 00:01:22.534 STDOUT terraform:  + region = (known after apply) 2025-05-30 00:01:22.534529 | orchestrator | 00:01:22.534 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-30 00:01:22.534534 | orchestrator | 00:01:22.534 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-05-30 00:01:22.535899 | orchestrator | 00:01:22.534 STDOUT terraform:  } 2025-05-30 00:01:22.535916 | orchestrator | 00:01:22.534 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-05-30 00:01:22.535921 | orchestrator | 00:01:22.534 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-05-30 00:01:22.535926 | orchestrator | 00:01:22.534 STDOUT terraform:  + description = "ssh" 2025-05-30 00:01:22.535930 | orchestrator | 00:01:22.534 STDOUT terraform:  + direction = "ingress" 2025-05-30 00:01:22.535934 | orchestrator | 00:01:22.534 STDOUT terraform:  + ethertype = "IPv4" 2025-05-30 00:01:22.535938 | orchestrator | 00:01:22.534 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.535941 | orchestrator | 00:01:22.534 STDOUT terraform:  + port_range_max = 22 2025-05-30 00:01:22.535945 | orchestrator | 00:01:22.534 STDOUT terraform:  + port_range_min = 22 2025-05-30 00:01:22.535949 | orchestrator | 00:01:22.534 STDOUT terraform:  + protocol = "tcp" 2025-05-30 00:01:22.535953 | orchestrator | 00:01:22.534 STDOUT terraform:  + region = (known after apply) 2025-05-30 00:01:22.535956 | orchestrator | 00:01:22.534 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-30 00:01:22.535960 | orchestrator | 00:01:22.534 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-30 00:01:22.535969 | orchestrator | 00:01:22.534 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-30 00:01:22.535973 | orchestrator | 00:01:22.534 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-30 00:01:22.535977 | orchestrator | 00:01:22.534 STDOUT terraform:  } 2025-05-30 00:01:22.535981 | orchestrator | 00:01:22.534 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-05-30 00:01:22.535985 | orchestrator | 00:01:22.534 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-05-30 00:01:22.535988 | orchestrator | 00:01:22.535 STDOUT terraform:  + description = "wireguard" 2025-05-30 00:01:22.535992 | orchestrator | 00:01:22.535 STDOUT terraform:  + direction = "ingress" 2025-05-30 00:01:22.535996 | orchestrator | 00:01:22.535 STDOUT terraform:  + ethertype = "IPv4" 2025-05-30 00:01:22.536000 | orchestrator | 00:01:22.535 
STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.536003 | orchestrator | 00:01:22.535 STDOUT terraform:  + port_range_max = 51820 2025-05-30 00:01:22.536007 | orchestrator | 00:01:22.535 STDOUT terraform:  + port_range_min = 51820 2025-05-30 00:01:22.536011 | orchestrator | 00:01:22.535 STDOUT terraform:  + protocol = "udp" 2025-05-30 00:01:22.536015 | orchestrator | 00:01:22.535 STDOUT terraform:  + region = (known after apply) 2025-05-30 00:01:22.536019 | orchestrator | 00:01:22.535 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-30 00:01:22.536023 | orchestrator | 00:01:22.535 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-30 00:01:22.536027 | orchestrator | 00:01:22.535 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-30 00:01:22.536030 | orchestrator | 00:01:22.535 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-30 00:01:22.536038 | orchestrator | 00:01:22.535 STDOUT terraform:  } 2025-05-30 00:01:22.536042 | orchestrator | 00:01:22.535 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-05-30 00:01:22.536046 | orchestrator | 00:01:22.535 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-05-30 00:01:22.536050 | orchestrator | 00:01:22.535 STDOUT terraform:  + direction = "ingress" 2025-05-30 00:01:22.536054 | orchestrator | 00:01:22.535 STDOUT terraform:  + ethertype = "IPv4" 2025-05-30 00:01:22.536059 | orchestrator | 00:01:22.535 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.536063 | orchestrator | 00:01:22.535 STDOUT terraform:  + protocol = "tcp" 2025-05-30 00:01:22.536072 | orchestrator | 00:01:22.535 STDOUT terraform:  + region = (known after apply) 2025-05-30 00:01:22.536076 | orchestrator | 00:01:22.535 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-30 00:01:22.536080 | orchestrator | 00:01:22.535 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-30 00:01:22.536083 | orchestrator | 00:01:22.535 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-30 00:01:22.536087 | orchestrator | 00:01:22.535 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-30 00:01:22.536094 | orchestrator | 00:01:22.535 STDOUT terraform:  } 2025-05-30 00:01:22.536097 | orchestrator | 00:01:22.535 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-05-30 00:01:22.536101 | orchestrator | 00:01:22.535 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-05-30 00:01:22.536105 | orchestrator | 00:01:22.535 STDOUT terraform:  + direction = "ingress" 2025-05-30 00:01:22.536109 | orchestrator | 00:01:22.535 STDOUT terraform:  + ethertype = "IPv4" 2025-05-30 00:01:22.536113 | orchestrator | 00:01:22.535 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.536116 | orchestrator | 00:01:22.535 STDOUT terraform:  + protocol = "udp" 2025-05-30 00:01:22.536120 | orchestrator | 00:01:22.535 STDOUT terraform:  + region = (known after apply) 2025-05-30 00:01:22.536128 | orchestrator | 00:01:22.535 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-30 00:01:22.536131 | orchestrator | 00:01:22.535 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-30 00:01:22.536135 | orchestrator | 00:01:22.535 STDOUT
terraform:  + security_group_id = (known after apply) 2025-05-30 00:01:22.536139 | orchestrator | 00:01:22.535 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-30 00:01:22.536143 | orchestrator | 00:01:22.535 STDOUT terraform:  } 2025-05-30 00:01:22.536146 | orchestrator | 00:01:22.535 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-05-30 00:01:22.536150 | orchestrator | 00:01:22.536 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-05-30 00:01:22.536155 | orchestrator | 00:01:22.536 STDOUT terraform:  + direction = "ingress" 2025-05-30 00:01:22.536159 | orchestrator | 00:01:22.536 STDOUT terraform:  + ethertype = "IPv4" 2025-05-30 00:01:22.536163 | orchestrator | 00:01:22.536 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.536167 | orchestrator | 00:01:22.536 STDOUT terraform:  + protocol = "icmp" 2025-05-30 00:01:22.537942 | orchestrator | 00:01:22.536 STDOUT terraform:  + region = (known after apply) 2025-05-30 00:01:22.537981 | orchestrator | 00:01:22.536 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-30 00:01:22.537988 | orchestrator | 00:01:22.536 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-30 00:01:22.537992 | orchestrator | 00:01:22.536 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-30 00:01:22.537996 | orchestrator | 00:01:22.536 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-30 00:01:22.538000 | orchestrator | 00:01:22.536 STDOUT terraform:  } 2025-05-30 00:01:22.538004 | orchestrator | 00:01:22.536 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-05-30 00:01:22.538009 | orchestrator | 00:01:22.536 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-05-30 00:01:22.538025 | orchestrator | 00:01:22.536 STDOUT terraform:  + direction = "ingress" 2025-05-30 00:01:22.538030 | orchestrator | 00:01:22.536 STDOUT terraform:  + ethertype = "IPv4" 2025-05-30 00:01:22.538042 | orchestrator | 00:01:22.536 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.538054 | orchestrator | 00:01:22.536 STDOUT terraform:  + protocol = "tcp" 2025-05-30 00:01:22.538060 | orchestrator | 00:01:22.536 STDOUT terraform:  + region = (known after apply) 2025-05-30 00:01:22.538067 | orchestrator | 00:01:22.536 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-30 00:01:22.538073 | orchestrator | 00:01:22.536 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-30 00:01:22.538080 | orchestrator | 00:01:22.536 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-30 00:01:22.538086 | orchestrator | 00:01:22.536 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-30 00:01:22.538092 | orchestrator | 00:01:22.536 STDOUT terraform:  } 2025-05-30 00:01:22.538099 | orchestrator | 00:01:22.536 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-05-30 00:01:22.538105 | orchestrator | 00:01:22.536 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-05-30 00:01:22.538112 | orchestrator | 00:01:22.536 STDOUT terraform:  + direction = "ingress" 2025-05-30 00:01:22.538118 | orchestrator | 00:01:22.536 STDOUT terraform:  + ethertype = "IPv4" 2025-05-30 00:01:22.538125 | orchestrator | 00:01:22.536 STDOUT terraform:  + id = (known 
after apply) 2025-05-30 00:01:22.538131 | orchestrator | 00:01:22.536 STDOUT terraform:  + protocol = "udp" 2025-05-30 00:01:22.538137 | orchestrator | 00:01:22.536 STDOUT terraform:  + region = (known after apply) 2025-05-30 00:01:22.538144 | orchestrator | 00:01:22.536 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-30 00:01:22.538150 | orchestrator | 00:01:22.536 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-30 00:01:22.538156 | orchestrator | 00:01:22.536 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-30 00:01:22.538163 | orchestrator | 00:01:22.536 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-30 00:01:22.538169 | orchestrator | 00:01:22.536 STDOUT terraform:  } 2025-05-30 00:01:22.538175 | orchestrator | 00:01:22.536 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-05-30 00:01:22.538182 | orchestrator | 00:01:22.536 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-05-30 00:01:22.538189 | orchestrator | 00:01:22.536 STDOUT terraform:  + direction = "ingress" 2025-05-30 00:01:22.538195 | orchestrator | 00:01:22.537 STDOUT terraform:  + ethertype = "IPv4" 2025-05-30 00:01:22.538202 | orchestrator | 00:01:22.537 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.538209 | orchestrator | 00:01:22.537 STDOUT terraform:  + protocol = "icmp" 2025-05-30 00:01:22.538223 | orchestrator | 00:01:22.537 STDOUT terraform:  + region = (known after apply) 2025-05-30 00:01:22.538230 | orchestrator | 00:01:22.537 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-30 00:01:22.538237 | orchestrator | 00:01:22.537 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-30 00:01:22.538249 | orchestrator | 00:01:22.537 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-30 00:01:22.538255 | orchestrator | 00:01:22.537 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-30 00:01:22.538262 | orchestrator | 00:01:22.537 STDOUT terraform:  } 2025-05-30 00:01:22.538269 | orchestrator | 00:01:22.537 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-05-30 00:01:22.538275 | orchestrator | 00:01:22.537 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-05-30 00:01:22.538283 | orchestrator | 00:01:22.537 STDOUT terraform:  + description = "vrrp" 2025-05-30 00:01:22.538289 | orchestrator | 00:01:22.537 STDOUT terraform:  + direction = "ingress" 2025-05-30 00:01:22.538296 | orchestrator | 00:01:22.537 STDOUT terraform:  + ethertype = "IPv4" 2025-05-30 00:01:22.538303 | orchestrator | 00:01:22.537 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.538312 | orchestrator | 00:01:22.537 STDOUT terraform:  + protocol = "112" 2025-05-30 00:01:22.538320 | orchestrator | 00:01:22.537 STDOUT terraform:  + region = (known after apply) 2025-05-30 00:01:22.538326 | orchestrator | 00:01:22.537 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-30 00:01:22.538333 | orchestrator | 00:01:22.537 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-30 00:01:22.538339 | orchestrator | 00:01:22.537 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-30 00:01:22.538345 | orchestrator | 00:01:22.537 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-30 00:01:22.538351 | orchestrator | 00:01:22.537 STDOUT terraform:  } 2025-05-30 
00:01:22.538357 | orchestrator | 00:01:22.537 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-05-30 00:01:22.538364 | orchestrator | 00:01:22.537 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-05-30 00:01:22.538370 | orchestrator | 00:01:22.537 STDOUT terraform:  + all_tags = (known after apply) 2025-05-30 00:01:22.538377 | orchestrator | 00:01:22.537 STDOUT terraform:  + description = "management security group" 2025-05-30 00:01:22.538384 | orchestrator | 00:01:22.537 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.538390 | orchestrator | 00:01:22.537 STDOUT terraform:  + name = "testbed-management" 2025-05-30 00:01:22.538396 | orchestrator | 00:01:22.537 STDOUT terraform:  + region = (known after apply) 2025-05-30 00:01:22.538402 | orchestrator | 00:01:22.537 STDOUT terraform:  + stateful = (known after apply) 2025-05-30 00:01:22.538409 | orchestrator | 00:01:22.537 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-30 00:01:22.538415 | orchestrator | 00:01:22.537 STDOUT terraform:  } 2025-05-30 00:01:22.538422 | orchestrator | 00:01:22.537 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-05-30 00:01:22.538428 | orchestrator | 00:01:22.537 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-05-30 00:01:22.538434 | orchestrator | 00:01:22.537 STDOUT terraform:  + all_tags = (known after apply) 2025-05-30 00:01:22.538440 | orchestrator | 00:01:22.537 STDOUT terraform:  + description = "node security group" 2025-05-30 00:01:22.538451 | orchestrator | 00:01:22.537 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.538458 | orchestrator | 00:01:22.537 STDOUT terraform:  + name = "testbed-node" 2025-05-30 00:01:22.538464 | orchestrator | 00:01:22.537 STDOUT terraform:  + region = (known after apply) 2025-05-30 00:01:22.538471 | orchestrator | 00:01:22.538 STDOUT terraform:  + stateful = (known after apply) 2025-05-30 00:01:22.538477 | orchestrator | 00:01:22.538 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-30 00:01:22.538490 | orchestrator | 00:01:22.538 STDOUT terraform:  } 2025-05-30 00:01:22.538497 | orchestrator | 00:01:22.538 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-05-30 00:01:22.538503 | orchestrator | 00:01:22.538 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-05-30 00:01:22.538510 | orchestrator | 00:01:22.538 STDOUT terraform:  + all_tags = (known after apply) 2025-05-30 00:01:22.538516 | orchestrator | 00:01:22.538 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-05-30 00:01:22.538522 | orchestrator | 00:01:22.538 STDOUT terraform:  + dns_nameservers = [ 2025-05-30 00:01:22.538529 | orchestrator | 00:01:22.538 STDOUT terraform:  + "8.8.8.8", 2025-05-30 00:01:22.538535 | orchestrator | 00:01:22.538 STDOUT terraform:  + "9.9.9.9", 2025-05-30 00:01:22.538541 | orchestrator | 00:01:22.538 STDOUT terraform:  ] 2025-05-30 00:01:22.538548 | orchestrator | 00:01:22.538 STDOUT terraform:  + enable_dhcp = true 2025-05-30 00:01:22.538554 | orchestrator | 00:01:22.538 STDOUT terraform:  + gateway_ip = (known after apply) 2025-05-30 00:01:22.538561 | orchestrator | 00:01:22.538 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.538567 | orchestrator | 00:01:22.538 STDOUT terraform:  + ip_version = 4 2025-05-30 00:01:22.538574 | 
orchestrator | 00:01:22.538 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-05-30 00:01:22.538580 | orchestrator | 00:01:22.538 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-05-30 00:01:22.538587 | orchestrator | 00:01:22.538 STDOUT terraform:  + name = "subnet-testbed-management" 2025-05-30 00:01:22.538593 | orchestrator | 00:01:22.538 STDOUT terraform:  + network_id = (known after apply) 2025-05-30 00:01:22.538600 | orchestrator | 00:01:22.538 STDOUT terraform:  + no_gateway = false 2025-05-30 00:01:22.538610 | orchestrator | 00:01:22.538 STDOUT terraform:  + region = (known after apply) 2025-05-30 00:01:22.538618 | orchestrator | 00:01:22.538 STDOUT terraform:  + service_types = (known after apply) 2025-05-30 00:01:22.538625 | orchestrator | 00:01:22.538 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-30 00:01:22.538632 | orchestrator | 00:01:22.538 STDOUT terraform:  + allocation_pool { 2025-05-30 00:01:22.538638 | orchestrator | 00:01:22.538 STDOUT terraform:  + end = "192.168.31.250" 2025-05-30 00:01:22.538647 | orchestrator | 00:01:22.538 STDOUT terraform:  + start = "192.168.31.200" 2025-05-30 00:01:22.538654 | orchestrator | 00:01:22.538 STDOUT terraform:  } 2025-05-30 00:01:22.538660 | orchestrator | 00:01:22.538 STDOUT terraform:  } 2025-05-30 00:01:22.538672 | orchestrator | 00:01:22.538 STDOUT terraform:  # terraform_data.image will be created 2025-05-30 00:01:22.538739 | orchestrator | 00:01:22.538 STDOUT terraform:  + resource "terraform_data" "image" { 2025-05-30 00:01:22.538752 | orchestrator | 00:01:22.538 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.538759 | orchestrator | 00:01:22.538 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-30 00:01:22.538766 | orchestrator | 00:01:22.538 STDOUT terraform:  + output = (known after apply) 2025-05-30 00:01:22.538774 | orchestrator | 00:01:22.538 STDOUT terraform:  } 2025-05-30 00:01:22.538782 | orchestrator | 00:01:22.538 STDOUT terraform:  # terraform_data.image_node will be created 2025-05-30 00:01:22.538813 | orchestrator | 00:01:22.538 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-05-30 00:01:22.538849 | orchestrator | 00:01:22.538 STDOUT terraform:  + id = (known after apply) 2025-05-30 00:01:22.538859 | orchestrator | 00:01:22.538 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-30 00:01:22.538884 | orchestrator | 00:01:22.538 STDOUT terraform:  + output = (known after apply) 2025-05-30 00:01:22.538891 | orchestrator | 00:01:22.538 STDOUT terraform:  } 2025-05-30 00:01:22.538916 | orchestrator | 00:01:22.538 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-05-30 00:01:22.538926 | orchestrator | 00:01:22.538 STDOUT terraform: Changes to Outputs: 2025-05-30 00:01:22.538951 | orchestrator | 00:01:22.538 STDOUT terraform:  + manager_address = (sensitive value) 2025-05-30 00:01:22.538960 | orchestrator | 00:01:22.538 STDOUT terraform:  + private_key = (sensitive value) 2025-05-30 00:01:22.751470 | orchestrator | 00:01:22.751 STDOUT terraform: terraform_data.image_node: Creating... 2025-05-30 00:01:22.754757 | orchestrator | 00:01:22.754 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=1aec5a9b-4a2c-2ff7-5309-a75116cf26a7] 2025-05-30 00:01:22.755094 | orchestrator | 00:01:22.754 STDOUT terraform: terraform_data.image: Creating... 
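For reference, a Terraform configuration of roughly the following shape produces the subnet_management plan entry shown above. This is a sketch, not the testbed's actual source: only attributes visible in the plan output are filled in, and the network name is a placeholder.

resource "openstack_networking_network_v2" "net_management" {
  # The log only shows the resource address; the name here is a placeholder.
  name = "net-testbed-management"
}

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  # DHCP only hands out addresses from this range; the fixed 192.168.16.x
  # addresses used by the ports in the plan sit outside of it.
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}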
2025-05-30 00:01:22.758453 | orchestrator | 00:01:22.758 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=7f6e68a2-42dd-2368-2451-1822787e1410] 2025-05-30 00:01:22.774615 | orchestrator | 00:01:22.774 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-05-30 00:01:22.775256 | orchestrator | 00:01:22.775 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-05-30 00:01:22.775463 | orchestrator | 00:01:22.775 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-05-30 00:01:22.781786 | orchestrator | 00:01:22.781 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-05-30 00:01:22.781832 | orchestrator | 00:01:22.781 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-05-30 00:01:22.781866 | orchestrator | 00:01:22.781 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-05-30 00:01:22.781913 | orchestrator | 00:01:22.781 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-05-30 00:01:22.782227 | orchestrator | 00:01:22.782 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-05-30 00:01:22.788883 | orchestrator | 00:01:22.788 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-05-30 00:01:22.788933 | orchestrator | 00:01:22.788 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-05-30 00:01:23.240844 | orchestrator | 00:01:23.240 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2025-05-30 00:01:23.249349 | orchestrator | 00:01:23.249 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-30 00:01:23.254349 | orchestrator | 00:01:23.254 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-05-30 00:01:23.254716 | orchestrator | 00:01:23.254 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-30 00:01:23.262579 | orchestrator | 00:01:23.262 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-05-30 00:01:23.263234 | orchestrator | 00:01:23.263 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-05-30 00:01:28.726061 | orchestrator | 00:01:28.725 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=08843fe6-fafc-4dc8-a9c9-1ecc98c6cae8] 2025-05-30 00:01:28.746299 | orchestrator | 00:01:28.746 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-05-30 00:01:28.753645 | orchestrator | 00:01:28.753 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=02374b222183070a37a2ce575b1cdd52584c4893] 2025-05-30 00:01:28.771062 | orchestrator | 00:01:28.770 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-05-30 00:01:28.777886 | orchestrator | 00:01:28.777 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=16e7dee1f6153793303219f0d706a9b36e65b226] 2025-05-30 00:01:28.787750 | orchestrator | 00:01:28.787 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-05-30 00:01:32.783182 | orchestrator | 00:01:32.782 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... 
[10s elapsed] 2025-05-30 00:01:32.783285 | orchestrator | 00:01:32.783 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-05-30 00:01:32.783441 | orchestrator | 00:01:32.783 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-05-30 00:01:32.783575 | orchestrator | 00:01:32.783 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-05-30 00:01:32.783782 | orchestrator | 00:01:32.783 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-05-30 00:01:32.789309 | orchestrator | 00:01:32.789 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-05-30 00:01:33.255179 | orchestrator | 00:01:33.254 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-05-30 00:01:33.263409 | orchestrator | 00:01:33.263 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-05-30 00:01:33.264429 | orchestrator | 00:01:33.264 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-05-30 00:01:33.337735 | orchestrator | 00:01:33.337 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=fcd55a48-2b4a-45aa-bb97-767fc341b1ef] 2025-05-30 00:01:33.342656 | orchestrator | 00:01:33.342 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=2529d57e-ffb4-494c-a22f-a2bb1703f8b2] 2025-05-30 00:01:33.344839 | orchestrator | 00:01:33.344 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-05-30 00:01:33.349442 | orchestrator | 00:01:33.349 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-05-30 00:01:33.394070 | orchestrator | 00:01:33.388 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 10s [id=173bbd31-d008-4662-8aea-7cfb1ab21884] 2025-05-30 00:01:33.398074 | orchestrator | 00:01:33.397 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=5232ed07-4d85-4988-9bc7-7d761a8f0a42] 2025-05-30 00:01:33.400661 | orchestrator | 00:01:33.400 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 10s [id=c7216231-2c47-48eb-b4a1-b98b10008028] 2025-05-30 00:01:33.403889 | orchestrator | 00:01:33.403 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-05-30 00:01:33.404471 | orchestrator | 00:01:33.404 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-05-30 00:01:33.410567 | orchestrator | 00:01:33.410 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-05-30 00:01:33.411999 | orchestrator | 00:01:33.411 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 10s [id=8d1e0c18-9aac-4f03-b30e-87512c271b47] 2025-05-30 00:01:33.423972 | orchestrator | 00:01:33.423 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 
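The indexed node_volume[0..8] and node_base_volume[0..5] resources above are what count-based volume definitions look like at apply time. A minimal sketch, assuming count is used; the name and size are placeholders because the excerpt does not show the volume attributes:

resource "openstack_blockstorage_volume_v3" "node_volume" {
  count = 9                                    # node_volume[0] .. node_volume[8] in the log
  name  = "testbed-node-volume-${count.index}" # placeholder name
  size  = 20                                   # GiB; placeholder size
}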
2025-05-30 00:01:33.451504 | orchestrator | 00:01:33.451 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 10s [id=fd28e93c-f7f0-4d71-9af0-3817aadd609f] 2025-05-30 00:01:33.458214 | orchestrator | 00:01:33.457 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-05-30 00:01:33.475049 | orchestrator | 00:01:33.474 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 10s [id=d57cbd6a-67f1-4040-83cf-671f4c3c6a1f] 2025-05-30 00:01:33.480105 | orchestrator | 00:01:33.479 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=76f37bde-13ed-44ba-8084-a2417c9798d9] 2025-05-30 00:01:38.788872 | orchestrator | 00:01:38.788 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed] 2025-05-30 00:01:39.094425 | orchestrator | 00:01:39.094 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=19ca4269-f512-4358-b365-f38392d919ee] 2025-05-30 00:01:39.356203 | orchestrator | 00:01:39.355 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=0107a820-ef09-4c0f-8ff7-2329172a3ddb] 2025-05-30 00:01:39.363604 | orchestrator | 00:01:39.363 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-05-30 00:01:43.346263 | orchestrator | 00:01:43.345 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-05-30 00:01:43.350491 | orchestrator | 00:01:43.350 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-05-30 00:01:43.404796 | orchestrator | 00:01:43.404 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-05-30 00:01:43.406686 | orchestrator | 00:01:43.406 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-05-30 00:01:43.412013 | orchestrator | 00:01:43.411 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-05-30 00:01:43.426421 | orchestrator | 00:01:43.426 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... 
[10s elapsed] 2025-05-30 00:01:43.731302 | orchestrator | 00:01:43.730 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 11s [id=62bf4b98-4a21-4975-9c67-1ea56f697b51] 2025-05-30 00:01:43.747900 | orchestrator | 00:01:43.747 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 11s [id=9a6319b3-0c44-4d2f-bfc1-43899b1e392d] 2025-05-30 00:01:43.779633 | orchestrator | 00:01:43.779 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 11s [id=43763239-3247-473a-87fc-14ea183bb8af] 2025-05-30 00:01:43.788949 | orchestrator | 00:01:43.788 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 11s [id=c29df819-5e55-4aea-aecd-e9fcfd91068f] 2025-05-30 00:01:43.808614 | orchestrator | 00:01:43.808 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 11s [id=edc9b60b-d3ff-41c2-8d12-039335a3b5c5] 2025-05-30 00:01:43.836201 | orchestrator | 00:01:43.835 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 11s [id=770f164d-60f5-482d-a3bc-9c475531a1a8] 2025-05-30 00:01:47.222091 | orchestrator | 00:01:47.221 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=697c3fd8-315d-4afb-b4e8-1a9502a56dc8] 2025-05-30 00:01:47.228469 | orchestrator | 00:01:47.228 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-05-30 00:01:47.228538 | orchestrator | 00:01:47.228 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-05-30 00:01:47.229503 | orchestrator | 00:01:47.229 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-05-30 00:01:47.435880 | orchestrator | 00:01:47.431 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=7995ed8b-8b70-4a29-b830-05e3d751328b] 2025-05-30 00:01:47.439382 | orchestrator | 00:01:47.439 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-05-30 00:01:47.440240 | orchestrator | 00:01:47.440 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-05-30 00:01:47.442738 | orchestrator | 00:01:47.442 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=da7dd801-614b-4fac-a62c-0988ba16d787] 2025-05-30 00:01:47.446902 | orchestrator | 00:01:47.446 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-05-30 00:01:47.447334 | orchestrator | 00:01:47.447 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-05-30 00:01:47.448143 | orchestrator | 00:01:47.447 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-05-30 00:01:47.457199 | orchestrator | 00:01:47.456 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-05-30 00:01:47.457567 | orchestrator | 00:01:47.457 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-05-30 00:01:47.458304 | orchestrator | 00:01:47.458 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 
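The security_group_management group and its rule1 ("ssh") being created here correspond to the attributes printed earlier in the plan. A sketch that mirrors only those printed values; the security_group_id wiring is an assumption:

resource "openstack_networking_secgroup_v2" "security_group_management" {
  name        = "testbed-management"
  description = "management security group"
}

resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  # Assumed to attach to the management group defined above.
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}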
2025-05-30 00:01:47.458536 | orchestrator | 00:01:47.458 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-05-30 00:01:47.605317 | orchestrator | 00:01:47.604 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=206cd4bc-ee15-4984-9443-4c780bbd9300] 2025-05-30 00:01:47.620021 | orchestrator | 00:01:47.619 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-05-30 00:01:47.654514 | orchestrator | 00:01:47.654 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=bd6a90eb-c214-472a-8fcc-89fdfc01dd38] 2025-05-30 00:01:47.669879 | orchestrator | 00:01:47.669 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-05-30 00:01:47.796415 | orchestrator | 00:01:47.796 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=75ceee52-6e97-4881-9ce8-29df10fcd758] 2025-05-30 00:01:47.809553 | orchestrator | 00:01:47.809 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-05-30 00:01:47.857185 | orchestrator | 00:01:47.856 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=7d9cc135-bfd2-4fad-8f7b-0822ca13ba4f] 2025-05-30 00:01:47.876054 | orchestrator | 00:01:47.875 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-05-30 00:01:48.010699 | orchestrator | 00:01:48.010 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=004141fe-655f-40fa-99de-9b32e60b346e] 2025-05-30 00:01:48.022546 | orchestrator | 00:01:48.022 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-05-30 00:01:48.163212 | orchestrator | 00:01:48.162 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=a055edcc-dc31-47bf-9e76-1d3549d6092f] 2025-05-30 00:01:48.168931 | orchestrator | 00:01:48.168 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-05-30 00:01:48.386270 | orchestrator | 00:01:48.385 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=7350e142-e8c0-4eec-879b-08e369fafdfc] 2025-05-30 00:01:48.400979 | orchestrator | 00:01:48.400 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 
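The management ports being created here are the resources whose allowed_address_pairs and fixed_ip blocks appeared at the top of the plan excerpt. A sketch of one such port using only values printed in the plan; the excerpt does not show which port resource that block belongs to, so the resource label and the network/subnet/security-group references are assumptions:

resource "openstack_networking_port_v2" "manager_port_management" {
  network_id         = openstack_networking_network_v2.net_management.id
  security_group_ids = [openstack_networking_secgroup_v2.security_group_management.id]

  # Pin the port to a fixed address on the management subnet.
  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = "192.168.16.15"
  }

  # Let traffic for additional prefixes (e.g. VIPs) pass through this port.
  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.254/20"
  }
}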
2025-05-30 00:01:48.460542 | orchestrator | 00:01:48.460 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=70a293e6-fdfd-4e22-ba95-ccbe0b760def] 2025-05-30 00:01:48.657003 | orchestrator | 00:01:48.656 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 2s [id=98cf3326-f8ee-4ae0-8226-15e64b5310ba] 2025-05-30 00:01:53.416451 | orchestrator | 00:01:53.416 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 5s [id=9e0a8cd1-92c2-44d1-8518-06325ccd0cdb] 2025-05-30 00:01:53.480673 | orchestrator | 00:01:53.480 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 5s [id=847b04d7-0cfd-4f66-b3fd-05673c8e42cf] 2025-05-30 00:01:53.611538 | orchestrator | 00:01:53.611 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=7c20c5a5-779c-4313-8249-3ee0be5ed13d] 2025-05-30 00:01:53.683397 | orchestrator | 00:01:53.683 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=ee7d5088-2418-43cd-9176-6e6a666d8f96] 2025-05-30 00:01:53.709943 | orchestrator | 00:01:53.709 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=f5387598-7596-4a7d-a4e0-620fd1f63c9b] 2025-05-30 00:01:53.831668 | orchestrator | 00:01:53.831 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 7s [id=ac1692ab-98fa-479d-b591-d5da7116a4eb] 2025-05-30 00:01:53.945686 | orchestrator | 00:01:53.945 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=a153f5b9-f51b-4c62-994f-4499ff56957f] 2025-05-30 00:01:55.294889 | orchestrator | 00:01:55.294 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 8s [id=fd697430-6a8f-4b1e-991d-f9468fce1e7e] 2025-05-30 00:01:55.326371 | orchestrator | 00:01:55.326 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-05-30 00:01:55.338094 | orchestrator | 00:01:55.337 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-05-30 00:01:55.338806 | orchestrator | 00:01:55.338 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-05-30 00:01:55.345388 | orchestrator | 00:01:55.345 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-05-30 00:01:55.353783 | orchestrator | 00:01:55.353 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-05-30 00:01:55.358839 | orchestrator | 00:01:55.358 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-05-30 00:01:55.359849 | orchestrator | 00:01:55.359 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-05-30 00:02:01.905828 | orchestrator | 00:02:01.905 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=ba743926-b1c7-4084-a59f-caefa972c0c6] 2025-05-30 00:02:01.916542 | orchestrator | 00:02:01.916 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-05-30 00:02:01.922772 | orchestrator | 00:02:01.922 STDOUT terraform: local_file.inventory: Creating... 2025-05-30 00:02:01.924824 | orchestrator | 00:02:01.924 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 
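manager_floating_ip and its association, created below, give the manager its public address. A sketch assuming the usual pool/port wiring; the pool name is a placeholder:

resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "external"   # placeholder: name of the external floating-IP network
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}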
2025-05-30 00:02:01.934228 | orchestrator | 00:02:01.933 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=f7791594e0f809d88bce5737936e2d621a8dd7e1] 2025-05-30 00:02:01.935182 | orchestrator | 00:02:01.934 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=980ab9e04798b818a9befa08fea9becca0475620] 2025-05-30 00:02:03.219044 | orchestrator | 00:02:03.218 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=ba743926-b1c7-4084-a59f-caefa972c0c6] 2025-05-30 00:02:05.339668 | orchestrator | 00:02:05.339 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-05-30 00:02:05.341981 | orchestrator | 00:02:05.341 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-05-30 00:02:05.347171 | orchestrator | 00:02:05.347 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-05-30 00:02:05.355565 | orchestrator | 00:02:05.355 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-05-30 00:02:05.362064 | orchestrator | 00:02:05.361 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-05-30 00:02:05.362082 | orchestrator | 00:02:05.361 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-05-30 00:02:15.342613 | orchestrator | 00:02:15.342 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-05-30 00:02:15.342770 | orchestrator | 00:02:15.342 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-05-30 00:02:15.347705 | orchestrator | 00:02:15.347 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-05-30 00:02:15.356080 | orchestrator | 00:02:15.355 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-05-30 00:02:15.363281 | orchestrator | 00:02:15.363 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-05-30 00:02:15.363335 | orchestrator | 00:02:15.363 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-05-30 00:02:15.767293 | orchestrator | 00:02:15.766 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=bc3f4a2d-b093-4252-ad24-27667e80e26b] 2025-05-30 00:02:15.796460 | orchestrator | 00:02:15.796 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 21s [id=c126b5eb-c1bb-475c-b44b-677fd24f1d54] 2025-05-30 00:02:16.048586 | orchestrator | 00:02:16.048 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=7bda490c-ccca-4c4c-9918-bb00f359b5b3] 2025-05-30 00:02:16.227747 | orchestrator | 00:02:16.227 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=662534f1-45d7-4bbc-8403-05db729846d0] 2025-05-30 00:02:25.348226 | orchestrator | 00:02:25.347 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-05-30 00:02:25.364682 | orchestrator | 00:02:25.364 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... 
[30s elapsed] 2025-05-30 00:02:26.023282 | orchestrator | 00:02:26.022 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=fb8b0403-d3d9-41a0-8ed2-08e336ef3b06] 2025-05-30 00:02:26.052023 | orchestrator | 00:02:26.051 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=e0404baf-de23-40eb-ae22-3951de063198] 2025-05-30 00:02:26.066504 | orchestrator | 00:02:26.066 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-05-30 00:02:26.085642 | orchestrator | 00:02:26.085 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=846609303109603386] 2025-05-30 00:02:26.087399 | orchestrator | 00:02:26.087 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-05-30 00:02:26.089413 | orchestrator | 00:02:26.089 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-05-30 00:02:26.090214 | orchestrator | 00:02:26.089 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-05-30 00:02:26.090242 | orchestrator | 00:02:26.090 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-05-30 00:02:26.090495 | orchestrator | 00:02:26.090 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-05-30 00:02:26.092766 | orchestrator | 00:02:26.092 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-05-30 00:02:26.097660 | orchestrator | 00:02:26.097 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-05-30 00:02:26.104165 | orchestrator | 00:02:26.100 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-05-30 00:02:26.109515 | orchestrator | 00:02:26.109 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-05-30 00:02:26.110524 | orchestrator | 00:02:26.110 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
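The node_volume_attachment[0..8] resources pair the nine data volumes with three of the node servers. A sketch; the index mapping is inferred from the instance/volume IDs in the attachment entries below and the real module may compute it differently, and the dependency on node_semaphore is assumed from the apply ordering:

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count = 9

  # Inferred from the attachment IDs in the log: attachments 0,3,6 land on
  # node_server[3], 1,4,7 on node_server[4], and 2,5,8 on node_server[5].
  instance_id = openstack_compute_instance_v2.node_server[3 + count.index % 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id

  # The ordering suggests attachments wait for all node servers via this marker.
  depends_on = [null_resource.node_semaphore]
}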
2025-05-30 00:02:31.389660 | orchestrator | 00:02:31.389 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=7bda490c-ccca-4c4c-9918-bb00f359b5b3/fcd55a48-2b4a-45aa-bb97-767fc341b1ef] 2025-05-30 00:02:31.396918 | orchestrator | 00:02:31.396 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=662534f1-45d7-4bbc-8403-05db729846d0/76f37bde-13ed-44ba-8084-a2417c9798d9] 2025-05-30 00:02:31.418953 | orchestrator | 00:02:31.418 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=e0404baf-de23-40eb-ae22-3951de063198/8d1e0c18-9aac-4f03-b30e-87512c271b47] 2025-05-30 00:02:31.424553 | orchestrator | 00:02:31.424 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=7bda490c-ccca-4c4c-9918-bb00f359b5b3/fd28e93c-f7f0-4d71-9af0-3817aadd609f] 2025-05-30 00:02:31.448434 | orchestrator | 00:02:31.448 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=662534f1-45d7-4bbc-8403-05db729846d0/d57cbd6a-67f1-4040-83cf-671f4c3c6a1f] 2025-05-30 00:02:31.460462 | orchestrator | 00:02:31.460 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=7bda490c-ccca-4c4c-9918-bb00f359b5b3/173bbd31-d008-4662-8aea-7cfb1ab21884] 2025-05-30 00:02:31.471327 | orchestrator | 00:02:31.470 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=e0404baf-de23-40eb-ae22-3951de063198/c7216231-2c47-48eb-b4a1-b98b10008028] 2025-05-30 00:02:31.484952 | orchestrator | 00:02:31.484 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=662534f1-45d7-4bbc-8403-05db729846d0/5232ed07-4d85-4988-9bc7-7d761a8f0a42] 2025-05-30 00:02:31.501382 | orchestrator | 00:02:31.500 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 6s [id=e0404baf-de23-40eb-ae22-3951de063198/2529d57e-ffb4-494c-a22f-a2bb1703f8b2] 2025-05-30 00:02:36.114242 | orchestrator | 00:02:36.113 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-05-30 00:02:46.114954 | orchestrator | 00:02:46.114 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-05-30 00:02:46.549064 | orchestrator | 00:02:46.548 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=ef06c806-cf3b-4dd4-8e46-722c042778ba] 2025-05-30 00:02:46.573867 | orchestrator | 00:02:46.573 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
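The two sensitive outputs and the MANAGER_ADDRESS file reported around the end of the apply could be declared roughly as follows. This is a sketch: the value expressions and the file path are assumptions, chosen only to illustrate why both outputs print as suppressed values.

output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address
  sensitive = true
}

output "private_key" {
  value     = openstack_compute_keypair_v2.key.private_key
  sensitive = true
}

resource "local_file" "MANAGER_ADDRESS" {
  filename = ".MANAGER_ADDRESS"   # placeholder path
  content  = openstack_networking_floatingip_v2.manager_floating_ip.address
}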
2025-05-30 00:02:46.573949 | orchestrator | 00:02:46.573 STDOUT terraform: Outputs: 2025-05-30 00:02:46.573961 | orchestrator | 00:02:46.573 STDOUT terraform: manager_address = 2025-05-30 00:02:46.573982 | orchestrator | 00:02:46.573 STDOUT terraform: private_key = 2025-05-30 00:02:46.682935 | orchestrator | ok: Runtime: 0:01:43.536199 2025-05-30 00:02:46.727483 | 2025-05-30 00:02:46.727672 | TASK [Fetch manager address] 2025-05-30 00:02:47.159804 | orchestrator | ok 2025-05-30 00:02:47.168098 | 2025-05-30 00:02:47.168244 | TASK [Set manager_host address] 2025-05-30 00:02:47.246997 | orchestrator | ok 2025-05-30 00:02:47.257513 | 2025-05-30 00:02:47.257652 | LOOP [Update ansible collections] 2025-05-30 00:02:48.086220 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-30 00:02:48.086598 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-30 00:02:48.086745 | orchestrator | Starting galaxy collection install process 2025-05-30 00:02:48.086789 | orchestrator | Process install dependency map 2025-05-30 00:02:48.086820 | orchestrator | Starting collection install process 2025-05-30 00:02:48.086881 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons' 2025-05-30 00:02:48.086921 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons 2025-05-30 00:02:48.086957 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-05-30 00:02:48.087023 | orchestrator | ok: Item: commons Runtime: 0:00:00.506744 2025-05-30 00:02:48.901923 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-30 00:02:48.902059 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-30 00:02:48.902091 | orchestrator | Starting galaxy collection install process 2025-05-30 00:02:48.902114 | orchestrator | Process install dependency map 2025-05-30 00:02:48.902135 | orchestrator | Starting collection install process 2025-05-30 00:02:48.902155 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services' 2025-05-30 00:02:48.902175 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services 2025-05-30 00:02:48.902195 | orchestrator | osism.services:999.0.0 was installed successfully 2025-05-30 00:02:48.902225 | orchestrator | ok: Item: services Runtime: 0:00:00.565413 2025-05-30 00:02:48.941499 | 2025-05-30 00:02:48.941782 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-30 00:02:59.560758 | orchestrator | ok 2025-05-30 00:02:59.572909 | 2025-05-30 00:02:59.573045 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-30 00:03:59.623709 | orchestrator | ok 2025-05-30 00:03:59.632885 | 2025-05-30 00:03:59.633020 | TASK [Fetch manager ssh hostkey] 2025-05-30 00:04:01.205474 | orchestrator | Output suppressed because no_log was given 2025-05-30 00:04:01.215116 | 2025-05-30 00:04:01.215275 | TASK [Get ssh keypair from terraform environment] 2025-05-30 00:04:01.750794 | orchestrator | ok: Runtime: 0:00:00.009418 2025-05-30 00:04:01.767486 | 2025-05-30 00:04:01.767655 | TASK [Point out that the following task takes some time and does not give any output] 
2025-05-30 00:04:01.799748 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-05-30 00:04:01.807698 | 2025-05-30 00:04:01.807814 | TASK [Run manager part 0] 2025-05-30 00:04:02.578832 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-30 00:04:02.616519 | orchestrator | 2025-05-30 00:04:02.616552 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-05-30 00:04:02.616558 | orchestrator | 2025-05-30 00:04:02.616569 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-05-30 00:04:04.821963 | orchestrator | ok: [testbed-manager] 2025-05-30 00:04:04.821999 | orchestrator | 2025-05-30 00:04:04.822049 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-30 00:04:04.822065 | orchestrator | 2025-05-30 00:04:04.822075 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-30 00:04:06.639541 | orchestrator | ok: [testbed-manager] 2025-05-30 00:04:06.639592 | orchestrator | 2025-05-30 00:04:06.639601 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-30 00:04:07.294496 | orchestrator | ok: [testbed-manager] 2025-05-30 00:04:07.294556 | orchestrator | 2025-05-30 00:04:07.294565 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-30 00:04:07.348457 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:04:07.348500 | orchestrator | 2025-05-30 00:04:07.348509 | orchestrator | TASK [Update package cache] **************************************************** 2025-05-30 00:04:07.373383 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:04:07.373419 | orchestrator | 2025-05-30 00:04:07.373425 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-30 00:04:07.409512 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:04:07.409559 | orchestrator | 2025-05-30 00:04:07.409564 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-30 00:04:07.441057 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:04:07.441107 | orchestrator | 2025-05-30 00:04:07.441114 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-30 00:04:07.475045 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:04:07.475091 | orchestrator | 2025-05-30 00:04:07.475097 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-05-30 00:04:07.505330 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:04:07.505380 | orchestrator | 2025-05-30 00:04:07.505388 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-05-30 00:04:07.541797 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:04:07.541845 | orchestrator | 2025-05-30 00:04:07.541852 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-05-30 00:04:08.320244 | orchestrator | changed: [testbed-manager] 2025-05-30 00:04:08.320285 | orchestrator | 2025-05-30 00:04:08.320293 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 
2025-05-30 00:07:07.855787 | orchestrator | changed: [testbed-manager] 2025-05-30 00:07:07.855863 | orchestrator | 2025-05-30 00:07:07.855874 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-30 00:08:34.419059 | orchestrator | changed: [testbed-manager] 2025-05-30 00:08:34.419165 | orchestrator | 2025-05-30 00:08:34.419183 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-30 00:08:54.154406 | orchestrator | changed: [testbed-manager] 2025-05-30 00:08:54.154452 | orchestrator | 2025-05-30 00:08:54.154461 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-30 00:09:02.756212 | orchestrator | changed: [testbed-manager] 2025-05-30 00:09:02.756258 | orchestrator | 2025-05-30 00:09:02.756267 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-30 00:09:02.805870 | orchestrator | ok: [testbed-manager] 2025-05-30 00:09:02.805962 | orchestrator | 2025-05-30 00:09:02.805978 | orchestrator | TASK [Get current user] ******************************************************** 2025-05-30 00:09:03.603032 | orchestrator | ok: [testbed-manager] 2025-05-30 00:09:03.603119 | orchestrator | 2025-05-30 00:09:03.603136 | orchestrator | TASK [Create venv directory] *************************************************** 2025-05-30 00:09:04.396656 | orchestrator | changed: [testbed-manager] 2025-05-30 00:09:04.396746 | orchestrator | 2025-05-30 00:09:04.396762 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-05-30 00:09:10.807342 | orchestrator | changed: [testbed-manager] 2025-05-30 00:09:10.807441 | orchestrator | 2025-05-30 00:09:10.807481 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-05-30 00:09:16.988245 | orchestrator | changed: [testbed-manager] 2025-05-30 00:09:16.988328 | orchestrator | 2025-05-30 00:09:16.988347 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-05-30 00:09:19.721366 | orchestrator | changed: [testbed-manager] 2025-05-30 00:09:19.721402 | orchestrator | 2025-05-30 00:09:19.721410 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-05-30 00:09:21.588666 | orchestrator | changed: [testbed-manager] 2025-05-30 00:09:21.588710 | orchestrator | 2025-05-30 00:09:21.588719 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-05-30 00:09:22.702669 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-30 00:09:22.702882 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-30 00:09:22.702901 | orchestrator | 2025-05-30 00:09:22.702914 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-05-30 00:09:22.747555 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-30 00:09:22.747633 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-30 00:09:22.747647 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-30 00:09:22.747659 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-05-30 00:09:25.873952 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-30 00:09:25.874130 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-30 00:09:25.874151 | orchestrator | 2025-05-30 00:09:25.874164 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-05-30 00:09:26.449923 | orchestrator | changed: [testbed-manager] 2025-05-30 00:09:26.449968 | orchestrator | 2025-05-30 00:09:26.449977 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-05-30 00:10:46.398292 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-05-30 00:10:46.398412 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-05-30 00:10:46.398442 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-05-30 00:10:46.398465 | orchestrator | 2025-05-30 00:10:46.398489 | orchestrator | TASK [Install local collections] *********************************************** 2025-05-30 00:10:48.727978 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-05-30 00:10:48.728083 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-05-30 00:10:48.728100 | orchestrator | 2025-05-30 00:10:48.728113 | orchestrator | PLAY [Create operator user] **************************************************** 2025-05-30 00:10:48.728126 | orchestrator | 2025-05-30 00:10:48.728137 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-30 00:10:50.210402 | orchestrator | ok: [testbed-manager] 2025-05-30 00:10:50.210491 | orchestrator | 2025-05-30 00:10:50.210510 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-30 00:10:50.253293 | orchestrator | ok: [testbed-manager] 2025-05-30 00:10:50.253345 | orchestrator | 2025-05-30 00:10:50.253351 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-30 00:10:50.325842 | orchestrator | ok: [testbed-manager] 2025-05-30 00:10:50.325943 | orchestrator | 2025-05-30 00:10:50.325959 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-30 00:10:51.140365 | orchestrator | changed: [testbed-manager] 2025-05-30 00:10:51.140440 | orchestrator | 2025-05-30 00:10:51.140451 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-30 00:10:51.865402 | orchestrator | changed: [testbed-manager] 2025-05-30 00:10:51.865491 | orchestrator | 2025-05-30 00:10:51.865507 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-30 00:10:53.260375 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-05-30 00:10:53.260450 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-05-30 00:10:53.260469 | orchestrator | 2025-05-30 00:10:53.260505 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-05-30 00:10:54.603487 | orchestrator | changed: [testbed-manager] 2025-05-30 00:10:54.603533 | orchestrator | 2025-05-30 00:10:54.603541 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-30 00:10:56.351705 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-05-30 
00:10:56.351785 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-05-30 00:10:56.351799 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-05-30 00:10:56.351811 | orchestrator | 2025-05-30 00:10:56.351823 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-30 00:10:56.893801 | orchestrator | changed: [testbed-manager] 2025-05-30 00:10:56.893885 | orchestrator | 2025-05-30 00:10:56.893904 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-30 00:10:56.960482 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:10:56.960550 | orchestrator | 2025-05-30 00:10:56.960563 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-05-30 00:10:57.798678 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-30 00:10:57.798753 | orchestrator | changed: [testbed-manager] 2025-05-30 00:10:57.798767 | orchestrator | 2025-05-30 00:10:57.798780 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-30 00:10:57.838786 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:10:57.838864 | orchestrator | 2025-05-30 00:10:57.838880 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-30 00:10:57.879420 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:10:57.879495 | orchestrator | 2025-05-30 00:10:57.879511 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-05-30 00:10:57.914547 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:10:57.914612 | orchestrator | 2025-05-30 00:10:57.914626 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-30 00:10:57.964392 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:10:57.964433 | orchestrator | 2025-05-30 00:10:57.964445 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-30 00:10:58.667850 | orchestrator | ok: [testbed-manager] 2025-05-30 00:10:58.667950 | orchestrator | 2025-05-30 00:10:58.667974 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-30 00:10:58.668013 | orchestrator | 2025-05-30 00:10:58.668034 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-30 00:11:00.073397 | orchestrator | ok: [testbed-manager] 2025-05-30 00:11:00.073482 | orchestrator | 2025-05-30 00:11:00.073497 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-05-30 00:11:01.048740 | orchestrator | changed: [testbed-manager] 2025-05-30 00:11:01.048780 | orchestrator | 2025-05-30 00:11:01.048786 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:11:01.048792 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-05-30 00:11:01.048797 | orchestrator | 2025-05-30 00:11:01.615770 | orchestrator | ok: Runtime: 0:06:59.029315 2025-05-30 00:11:01.635854 | 2025-05-30 00:11:01.636019 | TASK [Point out that the log in on the manager is now possible] 2025-05-30 00:11:01.685619 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
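Editor's note: in shell terms the osism.commons.operator role that just ran is ordinary user management: create a group and user, add it to adm and sudo, install a sudoers drop-in, prepare ~/.ssh with an authorized key, and lock the password. A minimal sketch of the equivalent manual steps, assuming the operator account is named dragon (the /home/dragon paths later in this log suggest that) and that the sudoers file simply grants passwordless sudo; the group name and sudoers content are assumptions, not the role's actual templates:

  # Illustrative shell equivalent of the operator-role tasks logged above.
  sudo groupadd dragon                                   # "Create operator group"
  sudo useradd -m -g dragon -s /bin/bash dragon          # "Create user"
  sudo usermod -aG adm,sudo dragon                       # "Add user to additional groups"
  echo 'dragon ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/operator   # assumed sudoers content
  sudo install -d -m 0700 -o dragon -g dragon /home/dragon/.ssh              # "Create .ssh directory"
  sudo passwd -d dragon && sudo passwd -l dragon         # "Unset & lock password"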
2025-05-30 00:11:01.696713 | 2025-05-30 00:11:01.696847 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-30 00:11:01.745031 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-05-30 00:11:01.754515 | 2025-05-30 00:11:01.754635 | TASK [Run manager part 1 + 2] 2025-05-30 00:11:02.603454 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-30 00:11:02.657369 | orchestrator | 2025-05-30 00:11:02.657421 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-05-30 00:11:02.657429 | orchestrator | 2025-05-30 00:11:02.657442 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-30 00:11:05.545096 | orchestrator | ok: [testbed-manager] 2025-05-30 00:11:05.545195 | orchestrator | 2025-05-30 00:11:05.545257 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-30 00:11:05.580802 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:11:05.580869 | orchestrator | 2025-05-30 00:11:05.580882 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-30 00:11:05.620107 | orchestrator | ok: [testbed-manager] 2025-05-30 00:11:05.620170 | orchestrator | 2025-05-30 00:11:05.620181 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-30 00:11:05.658626 | orchestrator | ok: [testbed-manager] 2025-05-30 00:11:05.658686 | orchestrator | 2025-05-30 00:11:05.658694 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-30 00:11:05.724444 | orchestrator | ok: [testbed-manager] 2025-05-30 00:11:05.724506 | orchestrator | 2025-05-30 00:11:05.724514 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-30 00:11:05.790711 | orchestrator | ok: [testbed-manager] 2025-05-30 00:11:05.790770 | orchestrator | 2025-05-30 00:11:05.790778 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-30 00:11:05.844265 | orchestrator | included: /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-05-30 00:11:05.844351 | orchestrator | 2025-05-30 00:11:05.844366 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-30 00:11:06.560945 | orchestrator | ok: [testbed-manager] 2025-05-30 00:11:06.561130 | orchestrator | 2025-05-30 00:11:06.561151 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-30 00:11:06.605555 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:11:06.605617 | orchestrator | 2025-05-30 00:11:06.605626 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-30 00:11:07.985093 | orchestrator | changed: [testbed-manager] 2025-05-30 00:11:07.985186 | orchestrator | 2025-05-30 00:11:07.985203 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-30 00:11:08.827321 | orchestrator | ok: [testbed-manager] 2025-05-30 00:11:08.827383 | orchestrator | 2025-05-30 00:11:08.827390 | orchestrator | TASK 
[osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-30 00:11:09.976372 | orchestrator | changed: [testbed-manager] 2025-05-30 00:11:09.976463 | orchestrator | 2025-05-30 00:11:09.976486 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-30 00:11:23.121319 | orchestrator | changed: [testbed-manager] 2025-05-30 00:11:23.121395 | orchestrator | 2025-05-30 00:11:23.121411 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-30 00:11:23.804162 | orchestrator | ok: [testbed-manager] 2025-05-30 00:11:23.804258 | orchestrator | 2025-05-30 00:11:23.804279 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-30 00:11:23.864838 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:11:23.864934 | orchestrator | 2025-05-30 00:11:23.864950 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-05-30 00:11:24.838080 | orchestrator | changed: [testbed-manager] 2025-05-30 00:11:24.838186 | orchestrator | 2025-05-30 00:11:24.838211 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-05-30 00:11:25.772817 | orchestrator | changed: [testbed-manager] 2025-05-30 00:11:25.772862 | orchestrator | 2025-05-30 00:11:25.772871 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-05-30 00:11:26.351112 | orchestrator | changed: [testbed-manager] 2025-05-30 00:11:26.351205 | orchestrator | 2025-05-30 00:11:26.351222 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-05-30 00:11:26.394267 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-30 00:11:26.394372 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-30 00:11:26.394386 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-30 00:11:26.394398 | orchestrator | deprecation_warnings=False in ansible.cfg. 
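Editor's note: the osism.commons.repository tasks above switch the manager to role-managed APT sources: on Ubuntu 24.04 the classic /etc/apt/sources.list is removed in favour of a deb822-style ubuntu.sources file plus an extra 99osism options file, and the package cache is refreshed afterwards. A rough shell outline of that sequence; the 99osism destination under /etc/apt/apt.conf.d is an assumption, the remaining paths come from the task names:

  # Shell outline of the repository switch logged above (the file contents
  # are templated by the role and not reproduced here).
  sudo install -d /etc/apt/sources.list.d                # "Create /etc/apt/sources.list.d directory"
  sudo rm -f /etc/apt/sources.list                       # "Remove sources.list file"
  #   /etc/apt/apt.conf.d/99osism                        <- "Copy 99osism apt configuration" (assumed path)
  #   /etc/apt/sources.list.d/ubuntu.sources             <- "Copy ubuntu.sources file"
  sudo apt-get update                                    # "Update package cache"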
2025-05-30 00:11:28.397078 | orchestrator | changed: [testbed-manager] 2025-05-30 00:11:28.397187 | orchestrator | 2025-05-30 00:11:28.397208 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-05-30 00:11:37.518292 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-05-30 00:11:37.518407 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-05-30 00:11:37.518427 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-05-30 00:11:37.518440 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-05-30 00:11:37.518460 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-05-30 00:11:37.518471 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-05-30 00:11:37.518483 | orchestrator | 2025-05-30 00:11:37.518495 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-05-30 00:11:38.572587 | orchestrator | changed: [testbed-manager] 2025-05-30 00:11:38.572679 | orchestrator | 2025-05-30 00:11:38.572696 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-05-30 00:11:38.618251 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:11:38.618334 | orchestrator | 2025-05-30 00:11:38.618350 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-05-30 00:11:41.712480 | orchestrator | changed: [testbed-manager] 2025-05-30 00:11:41.712553 | orchestrator | 2025-05-30 00:11:41.712569 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-05-30 00:11:41.755403 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:11:41.755484 | orchestrator | 2025-05-30 00:11:41.755499 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-05-30 00:13:26.451793 | orchestrator | changed: [testbed-manager] 2025-05-30 00:13:26.451896 | orchestrator | 2025-05-30 00:13:26.451917 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-30 00:13:27.576964 | orchestrator | ok: [testbed-manager] 2025-05-30 00:13:27.577033 | orchestrator | 2025-05-30 00:13:27.577049 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:13:27.577092 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-05-30 00:13:27.577106 | orchestrator | 2025-05-30 00:13:27.887342 | orchestrator | ok: Runtime: 0:02:25.639548 2025-05-30 00:13:27.902542 | 2025-05-30 00:13:27.902672 | TASK [Reboot manager] 2025-05-30 00:13:29.442415 | orchestrator | ok: Runtime: 0:00:00.952746 2025-05-30 00:13:29.458064 | 2025-05-30 00:13:29.458224 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-30 00:13:44.886932 | orchestrator | ok 2025-05-30 00:13:44.897692 | 2025-05-30 00:13:44.897844 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-30 00:14:44.946906 | orchestrator | ok 2025-05-30 00:14:44.956575 | 2025-05-30 00:14:44.956711 | TASK [Deploy manager + bootstrap nodes] 2025-05-30 00:14:50.111052 | orchestrator | 2025-05-30 00:14:50.112252 | orchestrator | # DEPLOY MANAGER 2025-05-30 00:14:50.112296 | orchestrator | 2025-05-30 00:14:50.112312 | orchestrator | + set -e 2025-05-30 00:14:50.112326 | orchestrator | + echo 2025-05-30 00:14:50.112340 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-05-30 00:14:50.112360 | orchestrator | + echo 2025-05-30 00:14:50.112411 | orchestrator | + cat /opt/manager-vars.sh 2025-05-30 00:14:50.115005 | orchestrator | export NUMBER_OF_NODES=6 2025-05-30 00:14:50.115093 | orchestrator | 2025-05-30 00:14:50.115108 | orchestrator | export CEPH_VERSION=reef 2025-05-30 00:14:50.115119 | orchestrator | export CONFIGURATION_VERSION=main 2025-05-30 00:14:50.115131 | orchestrator | export MANAGER_VERSION=8.1.0 2025-05-30 00:14:50.115159 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-05-30 00:14:50.115172 | orchestrator | 2025-05-30 00:14:50.115222 | orchestrator | export ARA=false 2025-05-30 00:14:50.115233 | orchestrator | export TEMPEST=false 2025-05-30 00:14:50.115247 | orchestrator | export IS_ZUUL=true 2025-05-30 00:14:50.115255 | orchestrator | 2025-05-30 00:14:50.115269 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.162 2025-05-30 00:14:50.115278 | orchestrator | export EXTERNAL_API=false 2025-05-30 00:14:50.115286 | orchestrator | 2025-05-30 00:14:50.115303 | orchestrator | export IMAGE_USER=ubuntu 2025-05-30 00:14:50.115311 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-05-30 00:14:50.115319 | orchestrator | 2025-05-30 00:14:50.115329 | orchestrator | export CEPH_STACK=ceph-ansible 2025-05-30 00:14:50.115347 | orchestrator | 2025-05-30 00:14:50.115356 | orchestrator | + echo 2025-05-30 00:14:50.115364 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-30 00:14:50.116502 | orchestrator | ++ export INTERACTIVE=false 2025-05-30 00:14:50.116576 | orchestrator | ++ INTERACTIVE=false 2025-05-30 00:14:50.116589 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-30 00:14:50.116601 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-30 00:14:50.116905 | orchestrator | + source /opt/manager-vars.sh 2025-05-30 00:14:50.116929 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-30 00:14:50.116940 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-30 00:14:50.116951 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-30 00:14:50.116962 | orchestrator | ++ CEPH_VERSION=reef 2025-05-30 00:14:50.116974 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-30 00:14:50.116986 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-30 00:14:50.116997 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-05-30 00:14:50.117008 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-05-30 00:14:50.117019 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-30 00:14:50.117030 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-30 00:14:50.117041 | orchestrator | ++ export ARA=false 2025-05-30 00:14:50.117052 | orchestrator | ++ ARA=false 2025-05-30 00:14:50.117111 | orchestrator | ++ export TEMPEST=false 2025-05-30 00:14:50.117125 | orchestrator | ++ TEMPEST=false 2025-05-30 00:14:50.117136 | orchestrator | ++ export IS_ZUUL=true 2025-05-30 00:14:50.117147 | orchestrator | ++ IS_ZUUL=true 2025-05-30 00:14:50.117161 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.162 2025-05-30 00:14:50.117186 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.162 2025-05-30 00:14:50.117220 | orchestrator | ++ export EXTERNAL_API=false 2025-05-30 00:14:50.117238 | orchestrator | ++ EXTERNAL_API=false 2025-05-30 00:14:50.117255 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-30 00:14:50.117273 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-30 00:14:50.117289 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-30 00:14:50.117308 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-30 
00:14:50.117326 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-30 00:14:50.117346 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-30 00:14:50.117364 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-05-30 00:14:50.178551 | orchestrator | + docker version 2025-05-30 00:14:50.470994 | orchestrator | Client: Docker Engine - Community 2025-05-30 00:14:50.471153 | orchestrator | Version: 26.1.4 2025-05-30 00:14:50.471172 | orchestrator | API version: 1.45 2025-05-30 00:14:50.471184 | orchestrator | Go version: go1.21.11 2025-05-30 00:14:50.471194 | orchestrator | Git commit: 5650f9b 2025-05-30 00:14:50.471206 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-05-30 00:14:50.471217 | orchestrator | OS/Arch: linux/amd64 2025-05-30 00:14:50.471229 | orchestrator | Context: default 2025-05-30 00:14:50.471240 | orchestrator | 2025-05-30 00:14:50.471251 | orchestrator | Server: Docker Engine - Community 2025-05-30 00:14:50.471262 | orchestrator | Engine: 2025-05-30 00:14:50.471274 | orchestrator | Version: 26.1.4 2025-05-30 00:14:50.471285 | orchestrator | API version: 1.45 (minimum version 1.24) 2025-05-30 00:14:50.471296 | orchestrator | Go version: go1.21.11 2025-05-30 00:14:50.471306 | orchestrator | Git commit: de5c9cf 2025-05-30 00:14:50.471348 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-05-30 00:14:50.471360 | orchestrator | OS/Arch: linux/amd64 2025-05-30 00:14:50.471371 | orchestrator | Experimental: false 2025-05-30 00:14:50.471382 | orchestrator | containerd: 2025-05-30 00:14:50.471393 | orchestrator | Version: 1.7.27 2025-05-30 00:14:50.471404 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-05-30 00:14:50.471415 | orchestrator | runc: 2025-05-30 00:14:50.471426 | orchestrator | Version: 1.2.5 2025-05-30 00:14:50.471437 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-05-30 00:14:50.471448 | orchestrator | docker-init: 2025-05-30 00:14:50.471458 | orchestrator | Version: 0.19.0 2025-05-30 00:14:50.471469 | orchestrator | GitCommit: de40ad0 2025-05-30 00:14:50.476177 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-05-30 00:14:50.486923 | orchestrator | + set -e 2025-05-30 00:14:50.486962 | orchestrator | + source /opt/manager-vars.sh 2025-05-30 00:14:50.486974 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-30 00:14:50.486985 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-30 00:14:50.486995 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-30 00:14:50.487006 | orchestrator | ++ CEPH_VERSION=reef 2025-05-30 00:14:50.487017 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-30 00:14:50.487030 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-30 00:14:50.487040 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-05-30 00:14:50.487051 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-05-30 00:14:50.487062 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-30 00:14:50.487102 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-30 00:14:50.487115 | orchestrator | ++ export ARA=false 2025-05-30 00:14:50.487126 | orchestrator | ++ ARA=false 2025-05-30 00:14:50.487136 | orchestrator | ++ export TEMPEST=false 2025-05-30 00:14:50.487147 | orchestrator | ++ TEMPEST=false 2025-05-30 00:14:50.487158 | orchestrator | ++ export IS_ZUUL=true 2025-05-30 00:14:50.487168 | orchestrator | ++ IS_ZUUL=true 2025-05-30 00:14:50.487179 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.162 2025-05-30 00:14:50.487191 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.193.162 2025-05-30 00:14:50.487202 | orchestrator | ++ export EXTERNAL_API=false 2025-05-30 00:14:50.487212 | orchestrator | ++ EXTERNAL_API=false 2025-05-30 00:14:50.487223 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-30 00:14:50.487233 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-30 00:14:50.487244 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-30 00:14:50.487255 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-30 00:14:50.487265 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-30 00:14:50.487276 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-30 00:14:50.487287 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-30 00:14:50.487297 | orchestrator | ++ export INTERACTIVE=false 2025-05-30 00:14:50.487308 | orchestrator | ++ INTERACTIVE=false 2025-05-30 00:14:50.487318 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-30 00:14:50.487329 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-30 00:14:50.487347 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-30 00:14:50.487359 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 8.1.0 2025-05-30 00:14:50.495471 | orchestrator | + set -e 2025-05-30 00:14:50.495504 | orchestrator | + VERSION=8.1.0 2025-05-30 00:14:50.495519 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 8.1.0/g' /opt/configuration/environments/manager/configuration.yml 2025-05-30 00:14:50.503973 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-30 00:14:50.504005 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2025-05-30 00:14:50.507644 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2025-05-30 00:14:50.513404 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2025-05-30 00:14:50.520967 | orchestrator | /opt/configuration ~ 2025-05-30 00:14:50.521001 | orchestrator | + set -e 2025-05-30 00:14:50.521013 | orchestrator | + pushd /opt/configuration 2025-05-30 00:14:50.521024 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-30 00:14:50.524244 | orchestrator | + source /opt/venv/bin/activate 2025-05-30 00:14:50.525323 | orchestrator | ++ deactivate nondestructive 2025-05-30 00:14:50.525375 | orchestrator | ++ '[' -n '' ']' 2025-05-30 00:14:50.525396 | orchestrator | ++ '[' -n '' ']' 2025-05-30 00:14:50.525420 | orchestrator | ++ hash -r 2025-05-30 00:14:50.525431 | orchestrator | ++ '[' -n '' ']' 2025-05-30 00:14:50.525442 | orchestrator | ++ unset VIRTUAL_ENV 2025-05-30 00:14:50.525465 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-05-30 00:14:50.525476 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-05-30 00:14:50.525522 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-05-30 00:14:50.525577 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-05-30 00:14:50.525601 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-05-30 00:14:50.525622 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-05-30 00:14:50.525642 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-30 00:14:50.525663 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-30 00:14:50.525683 | orchestrator | ++ export PATH 2025-05-30 00:14:50.525702 | orchestrator | ++ '[' -n '' ']' 2025-05-30 00:14:50.525721 | orchestrator | ++ '[' -z '' ']' 2025-05-30 00:14:50.525741 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-05-30 00:14:50.525760 | orchestrator | ++ PS1='(venv) ' 2025-05-30 00:14:50.525779 | orchestrator | ++ export PS1 2025-05-30 00:14:50.525798 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-05-30 00:14:50.525811 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-05-30 00:14:50.525827 | orchestrator | ++ hash -r 2025-05-30 00:14:50.525854 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2025-05-30 00:14:51.658499 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2025-05-30 00:14:51.659664 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3) 2025-05-30 00:14:51.661408 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2025-05-30 00:14:51.662539 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2) 2025-05-30 00:14:51.663695 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0) 2025-05-30 00:14:51.675573 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.2.1) 2025-05-30 00:14:51.677849 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2025-05-30 00:14:51.679626 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19) 2025-05-30 00:14:51.681870 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2025-05-30 00:14:51.732164 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.2) 2025-05-30 00:14:51.733720 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10) 2025-05-30 00:14:51.735239 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.4.0) 2025-05-30 00:14:51.736600 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.4.26) 2025-05-30 00:14:51.740860 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2) 2025-05-30 00:14:52.018711 | orchestrator | ++ which gilt 2025-05-30 00:14:52.024301 | 
orchestrator | + GILT=/opt/venv/bin/gilt 2025-05-30 00:14:52.024345 | orchestrator | + /opt/venv/bin/gilt overlay 2025-05-30 00:14:52.245232 | orchestrator | osism.cfg-generics: 2025-05-30 00:14:52.245327 | orchestrator | - cloning osism.cfg-generics to /home/dragon/.gilt/clone/github.com/osism.cfg-generics 2025-05-30 00:14:53.835198 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2025-05-30 00:14:53.835299 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2025-05-30 00:14:53.835521 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2025-05-30 00:14:53.835551 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2025-05-30 00:14:54.725446 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2025-05-30 00:14:54.739179 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2025-05-30 00:14:55.242701 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2025-05-30 00:14:55.296259 | orchestrator | ~ 2025-05-30 00:14:55.296341 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-30 00:14:55.296356 | orchestrator | + deactivate 2025-05-30 00:14:55.296369 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-05-30 00:14:55.296382 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-30 00:14:55.296393 | orchestrator | + export PATH 2025-05-30 00:14:55.296480 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-05-30 00:14:55.296550 | orchestrator | + '[' -n '' ']' 2025-05-30 00:14:55.296580 | orchestrator | + hash -r 2025-05-30 00:14:55.296598 | orchestrator | + '[' -n '' ']' 2025-05-30 00:14:55.296616 | orchestrator | + unset VIRTUAL_ENV 2025-05-30 00:14:55.296636 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-05-30 00:14:55.296650 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-05-30 00:14:55.296661 | orchestrator | + unset -f deactivate 2025-05-30 00:14:55.296672 | orchestrator | + popd 2025-05-30 00:14:55.298300 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]] 2025-05-30 00:14:55.298334 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-05-30 00:14:55.299273 | orchestrator | ++ semver 8.1.0 7.0.0 2025-05-30 00:14:55.362367 | orchestrator | + [[ 1 -ge 0 ]] 2025-05-30 00:14:55.362439 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-05-30 00:14:55.362453 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-05-30 00:14:55.408799 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-30 00:14:55.408908 | orchestrator | + source /opt/venv/bin/activate 2025-05-30 00:14:55.408935 | orchestrator | ++ deactivate nondestructive 2025-05-30 00:14:55.408947 | orchestrator | ++ '[' -n '' ']' 2025-05-30 00:14:55.408959 | orchestrator | ++ '[' -n '' ']' 2025-05-30 00:14:55.408970 | orchestrator | ++ hash -r 2025-05-30 00:14:55.408981 | orchestrator | ++ '[' -n '' ']' 2025-05-30 00:14:55.408992 | orchestrator | ++ unset VIRTUAL_ENV 2025-05-30 00:14:55.409002 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-05-30 00:14:55.409013 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-05-30 00:14:55.409251 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-05-30 00:14:55.409272 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-05-30 00:14:55.409292 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-05-30 00:14:55.409303 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-05-30 00:14:55.409314 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-30 00:14:55.409339 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-30 00:14:55.409350 | orchestrator | ++ export PATH 2025-05-30 00:14:55.409372 | orchestrator | ++ '[' -n '' ']' 2025-05-30 00:14:55.409388 | orchestrator | ++ '[' -z '' ']' 2025-05-30 00:14:55.409399 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-05-30 00:14:55.409410 | orchestrator | ++ PS1='(venv) ' 2025-05-30 00:14:55.409432 | orchestrator | ++ export PS1 2025-05-30 00:14:55.409443 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-05-30 00:14:55.409458 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-05-30 00:14:55.409472 | orchestrator | ++ hash -r 2025-05-30 00:14:55.409659 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-05-30 00:14:56.611484 | orchestrator | 2025-05-30 00:14:56.611606 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-05-30 00:14:56.611625 | orchestrator | 2025-05-30 00:14:56.611637 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-30 00:14:57.152714 | orchestrator | ok: [testbed-manager] 2025-05-30 00:14:57.152806 | orchestrator | 2025-05-30 00:14:57.152821 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-05-30 00:14:58.138621 | orchestrator | changed: [testbed-manager] 2025-05-30 00:14:58.138729 | orchestrator | 2025-05-30 00:14:58.138746 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-05-30 00:14:58.138760 | orchestrator | 2025-05-30 
00:14:58.138772 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-30 00:15:02.191210 | orchestrator | ok: [testbed-manager] 2025-05-30 00:15:02.191311 | orchestrator | 2025-05-30 00:15:02.191328 | orchestrator | TASK [Pull images] ************************************************************* 2025-05-30 00:15:06.942601 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2) 2025-05-30 00:15:06.942712 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/mariadb:11.6.2) 2025-05-30 00:15:06.942728 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:8.1.0) 2025-05-30 00:15:06.942741 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:8.1.0) 2025-05-30 00:15:06.942752 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:8.1.0) 2025-05-30 00:15:06.942767 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/redis:7.4.1-alpine) 2025-05-30 00:15:06.942779 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.1.7) 2025-05-30 00:15:06.942792 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:8.1.0) 2025-05-30 00:15:06.942803 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:0.20241219.2) 2025-05-30 00:15:06.942814 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/postgres:16.6-alpine) 2025-05-30 00:15:06.942825 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/traefik:v3.2.1) 2025-05-30 00:15:06.942836 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/hashicorp/vault:1.18.2) 2025-05-30 00:15:06.942847 | orchestrator | 2025-05-30 00:15:06.942859 | orchestrator | TASK [Check status] ************************************************************ 2025-05-30 00:16:33.585393 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-30 00:16:33.585498 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 2025-05-30 00:16:33.585514 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left). 2025-05-30 00:16:33.585526 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left). 2025-05-30 00:16:33.585537 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (116 retries left). 2025-05-30 00:16:33.585559 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j70682643109.1591', 'results_file': '/home/dragon/.ansible_async/j70682643109.1591', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'}) 2025-05-30 00:16:33.585578 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j629500044933.1616', 'results_file': '/home/dragon/.ansible_async/j629500044933.1616', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/mariadb:11.6.2', 'ansible_loop_var': 'item'}) 2025-05-30 00:16:33.585594 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 
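Editor's note: the 'Pull images' / 'Check status' pair above is Ansible's async pattern: every docker pull is started as a background job, and the follow-up task polls the async results until each pull reports finished, so the FAILED - RETRYING lines here are just that poll loop, not pull failures. A rough shell analogue using a few of the image references from the log:

  # Start several pulls in parallel, then block until all of them finish --
  # the shell counterpart of the async pull + status-check tasks above.
  for image in \
      registry.osism.tech/osism/osism-ansible:8.1.0 \
      registry.osism.tech/osism/kolla-ansible:8.1.0 \
      registry.osism.tech/dockerhub/library/mariadb:11.6.2; do
      docker pull "$image" &
  done
  wait   # returns once every background pull has completed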
2025-05-30 00:16:33.585605 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j728070589832.1641', 'results_file': '/home/dragon/.ansible_async/j728070589832.1641', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-30 00:16:33.585617 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j846637783373.1673', 'results_file': '/home/dragon/.ansible_async/j846637783373.1673', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-30 00:16:33.585628 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-30 00:16:33.585639 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j320469396274.1705', 'results_file': '/home/dragon/.ansible_async/j320469396274.1705', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-30 00:16:33.585651 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j707113983982.1738', 'results_file': '/home/dragon/.ansible_async/j707113983982.1738', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/redis:7.4.1-alpine', 'ansible_loop_var': 'item'}) 2025-05-30 00:16:33.585742 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-30 00:16:33.585764 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j639117425609.1771', 'results_file': '/home/dragon/.ansible_async/j639117425609.1771', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.1.7', 'ansible_loop_var': 'item'}) 2025-05-30 00:16:33.585776 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j211626876475.1803', 'results_file': '/home/dragon/.ansible_async/j211626876475.1803', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-30 00:16:33.585788 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j59961421834.1834', 'results_file': '/home/dragon/.ansible_async/j59961421834.1834', 'changed': True, 'item': 'registry.osism.tech/osism/osism:0.20241219.2', 'ansible_loop_var': 'item'}) 2025-05-30 00:16:33.585799 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j396088283584.1866', 'results_file': '/home/dragon/.ansible_async/j396088283584.1866', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/postgres:16.6-alpine', 'ansible_loop_var': 'item'}) 2025-05-30 00:16:33.585810 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j82394648459.1898', 'results_file': '/home/dragon/.ansible_async/j82394648459.1898', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/traefik:v3.2.1', 'ansible_loop_var': 'item'}) 2025-05-30 00:16:33.585821 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j999675050808.1930', 'results_file': '/home/dragon/.ansible_async/j999675050808.1930', 'changed': True, 'item': 'registry.osism.tech/dockerhub/hashicorp/vault:1.18.2', 
'ansible_loop_var': 'item'}) 2025-05-30 00:16:33.585832 | orchestrator | 2025-05-30 00:16:33.585861 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-05-30 00:16:33.638850 | orchestrator | ok: [testbed-manager] 2025-05-30 00:16:33.638909 | orchestrator | 2025-05-30 00:16:33.638923 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-05-30 00:16:34.136813 | orchestrator | changed: [testbed-manager] 2025-05-30 00:16:34.136904 | orchestrator | 2025-05-30 00:16:34.136921 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] ******************************* 2025-05-30 00:16:34.477325 | orchestrator | changed: [testbed-manager] 2025-05-30 00:16:34.477418 | orchestrator | 2025-05-30 00:16:34.477434 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-30 00:16:34.805085 | orchestrator | changed: [testbed-manager] 2025-05-30 00:16:34.805189 | orchestrator | 2025-05-30 00:16:34.805203 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-05-30 00:16:34.853214 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:16:34.853269 | orchestrator | 2025-05-30 00:16:34.853281 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-05-30 00:16:35.245683 | orchestrator | ok: [testbed-manager] 2025-05-30 00:16:35.245764 | orchestrator | 2025-05-30 00:16:35.245779 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-05-30 00:16:35.360428 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:16:35.360507 | orchestrator | 2025-05-30 00:16:35.360521 | orchestrator | PLAY [Apply role traefik & netbox] ********************************************* 2025-05-30 00:16:35.360533 | orchestrator | 2025-05-30 00:16:35.360545 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-30 00:16:37.211531 | orchestrator | ok: [testbed-manager] 2025-05-30 00:16:37.211638 | orchestrator | 2025-05-30 00:16:37.211655 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-05-30 00:16:37.299216 | orchestrator | included: osism.services.traefik for testbed-manager 2025-05-30 00:16:37.299315 | orchestrator | 2025-05-30 00:16:37.299332 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-05-30 00:16:37.357026 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-05-30 00:16:37.357154 | orchestrator | 2025-05-30 00:16:37.357169 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-05-30 00:16:38.505995 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-05-30 00:16:38.506177 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-05-30 00:16:38.506194 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-05-30 00:16:38.506206 | orchestrator | 2025-05-30 00:16:38.506235 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-05-30 00:16:40.342994 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-05-30 00:16:40.343168 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-05-30 
00:16:40.343187 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-05-30 00:16:40.343200 | orchestrator | 2025-05-30 00:16:40.343213 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-05-30 00:16:40.924218 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-30 00:16:40.924298 | orchestrator | changed: [testbed-manager] 2025-05-30 00:16:40.924306 | orchestrator | 2025-05-30 00:16:40.924313 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-05-30 00:16:41.518400 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-30 00:16:41.518508 | orchestrator | changed: [testbed-manager] 2025-05-30 00:16:41.518525 | orchestrator | 2025-05-30 00:16:41.518538 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-05-30 00:16:41.571415 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:16:41.571487 | orchestrator | 2025-05-30 00:16:41.571495 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-05-30 00:16:41.910451 | orchestrator | ok: [testbed-manager] 2025-05-30 00:16:41.910558 | orchestrator | 2025-05-30 00:16:41.910575 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-05-30 00:16:41.967291 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-05-30 00:16:41.967383 | orchestrator | 2025-05-30 00:16:41.967398 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-05-30 00:16:43.025518 | orchestrator | changed: [testbed-manager] 2025-05-30 00:16:43.025626 | orchestrator | 2025-05-30 00:16:43.025643 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-05-30 00:16:43.828126 | orchestrator | changed: [testbed-manager] 2025-05-30 00:16:43.828265 | orchestrator | 2025-05-30 00:16:43.828282 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-05-30 00:16:48.035267 | orchestrator | changed: [testbed-manager] 2025-05-30 00:16:48.035384 | orchestrator | 2025-05-30 00:16:48.035403 | orchestrator | TASK [Apply netbox role] ******************************************************* 2025-05-30 00:16:48.139717 | orchestrator | included: osism.services.netbox for testbed-manager 2025-05-30 00:16:48.139820 | orchestrator | 2025-05-30 00:16:48.139836 | orchestrator | TASK [osism.services.netbox : Include install tasks] *************************** 2025-05-30 00:16:48.213082 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager 2025-05-30 00:16:48.213266 | orchestrator | 2025-05-30 00:16:48.213294 | orchestrator | TASK [osism.services.netbox : Install required packages] *********************** 2025-05-30 00:16:50.896670 | orchestrator | ok: [testbed-manager] 2025-05-30 00:16:50.896801 | orchestrator | 2025-05-30 00:16:50.896820 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-05-30 00:16:51.009039 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager 2025-05-30 00:16:51.009193 | orchestrator | 2025-05-30 00:16:51.009210 | orchestrator 
| TASK [osism.services.netbox : Create required directories] ********************* 2025-05-30 00:16:52.124593 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox) 2025-05-30 00:16:52.124700 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration) 2025-05-30 00:16:52.124748 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets) 2025-05-30 00:16:52.124762 | orchestrator | 2025-05-30 00:16:52.124775 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] ******************* 2025-05-30 00:16:52.194547 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager 2025-05-30 00:16:52.194646 | orchestrator | 2025-05-30 00:16:52.194660 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] ***************** 2025-05-30 00:16:52.823626 | orchestrator | changed: [testbed-manager] => (item=postgres) 2025-05-30 00:16:52.823729 | orchestrator | 2025-05-30 00:16:52.823747 | orchestrator | TASK [osism.services.netbox : Copy postgres configuration file] **************** 2025-05-30 00:16:53.451519 | orchestrator | changed: [testbed-manager] 2025-05-30 00:16:53.451629 | orchestrator | 2025-05-30 00:16:53.451648 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-05-30 00:16:54.120793 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-30 00:16:54.120901 | orchestrator | changed: [testbed-manager] 2025-05-30 00:16:54.120918 | orchestrator | 2025-05-30 00:16:54.120931 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] ***** 2025-05-30 00:16:54.508448 | orchestrator | changed: [testbed-manager] 2025-05-30 00:16:54.508520 | orchestrator | 2025-05-30 00:16:54.508533 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] ******************* 2025-05-30 00:16:54.883901 | orchestrator | ok: [testbed-manager] 2025-05-30 00:16:54.883996 | orchestrator | 2025-05-30 00:16:54.884012 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ****************************** 2025-05-30 00:16:54.938078 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:16:54.938207 | orchestrator | 2025-05-30 00:16:54.938222 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] *********** 2025-05-30 00:16:55.551987 | orchestrator | changed: [testbed-manager] 2025-05-30 00:16:55.552123 | orchestrator | 2025-05-30 00:16:55.552144 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-05-30 00:16:55.620750 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager 2025-05-30 00:16:55.620832 | orchestrator | 2025-05-30 00:16:55.620847 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] *********** 2025-05-30 00:16:56.399259 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers) 2025-05-30 00:16:56.399349 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts) 2025-05-30 00:16:56.399368 | orchestrator | 2025-05-30 00:16:56.399385 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] ******************* 2025-05-30 00:16:57.097559 | orchestrator | changed: [testbed-manager] => (item=netbox) 2025-05-30 00:16:57.097673 
| orchestrator | 2025-05-30 00:16:57.097690 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ****************** 2025-05-30 00:16:57.742833 | orchestrator | changed: [testbed-manager] 2025-05-30 00:16:57.743004 | orchestrator | 2025-05-30 00:16:57.743037 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] **** 2025-05-30 00:16:57.790927 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:16:57.791028 | orchestrator | 2025-05-30 00:16:57.791044 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] ***** 2025-05-30 00:16:58.499663 | orchestrator | changed: [testbed-manager] 2025-05-30 00:16:58.499793 | orchestrator | 2025-05-30 00:16:58.499821 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-05-30 00:17:00.400807 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-30 00:17:00.400925 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-30 00:17:00.400941 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-30 00:17:00.400954 | orchestrator | changed: [testbed-manager] 2025-05-30 00:17:00.400967 | orchestrator | 2025-05-30 00:17:00.400979 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ****************** 2025-05-30 00:17:06.492768 | orchestrator | changed: [testbed-manager] => (item=custom_fields) 2025-05-30 00:17:06.492919 | orchestrator | changed: [testbed-manager] => (item=device_roles) 2025-05-30 00:17:06.492941 | orchestrator | changed: [testbed-manager] => (item=device_types) 2025-05-30 00:17:06.492981 | orchestrator | changed: [testbed-manager] => (item=groups) 2025-05-30 00:17:06.492994 | orchestrator | changed: [testbed-manager] => (item=manufacturers) 2025-05-30 00:17:06.493005 | orchestrator | changed: [testbed-manager] => (item=object_permissions) 2025-05-30 00:17:06.493016 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles) 2025-05-30 00:17:06.493027 | orchestrator | changed: [testbed-manager] => (item=sites) 2025-05-30 00:17:06.493038 | orchestrator | changed: [testbed-manager] => (item=tags) 2025-05-30 00:17:06.493049 | orchestrator | changed: [testbed-manager] => (item=users) 2025-05-30 00:17:06.493060 | orchestrator | 2025-05-30 00:17:06.493091 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] *************** 2025-05-30 00:17:07.141460 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py) 2025-05-30 00:17:07.141565 | orchestrator | 2025-05-30 00:17:07.141581 | orchestrator | TASK [osism.services.netbox : Include service tasks] *************************** 2025-05-30 00:17:07.235014 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager 2025-05-30 00:17:07.235141 | orchestrator | 2025-05-30 00:17:07.235156 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] ******************* 2025-05-30 00:17:07.939655 | orchestrator | changed: [testbed-manager] 2025-05-30 00:17:07.939764 | orchestrator | 2025-05-30 00:17:07.939781 | orchestrator | TASK [osism.services.netbox : Create traefik external network] ***************** 2025-05-30 00:17:08.595467 | orchestrator | ok: [testbed-manager] 2025-05-30 00:17:08.595595 | orchestrator | 2025-05-30 00:17:08.595614 | orchestrator | 
TASK [osism.services.netbox : Copy docker-compose.yml file] ******************** 2025-05-30 00:17:09.317714 | orchestrator | changed: [testbed-manager] 2025-05-30 00:17:09.317802 | orchestrator | 2025-05-30 00:17:09.317813 | orchestrator | TASK [osism.services.netbox : Pull container images] *************************** 2025-05-30 00:17:11.917208 | orchestrator | ok: [testbed-manager] 2025-05-30 00:17:11.917322 | orchestrator | 2025-05-30 00:17:11.917340 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] *** 2025-05-30 00:17:12.897559 | orchestrator | ok: [testbed-manager] 2025-05-30 00:17:12.897658 | orchestrator | 2025-05-30 00:17:12.897673 | orchestrator | TASK [osism.services.netbox : Manage netbox service] *************************** 2025-05-30 00:17:35.035963 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left). 2025-05-30 00:17:35.036087 | orchestrator | ok: [testbed-manager] 2025-05-30 00:17:35.036105 | orchestrator | 2025-05-30 00:17:35.036170 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ******** 2025-05-30 00:17:35.091803 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:17:35.091908 | orchestrator | 2025-05-30 00:17:35.091926 | orchestrator | TASK [osism.services.netbox : Flush handlers] ********************************** 2025-05-30 00:17:35.091939 | orchestrator | 2025-05-30 00:17:35.091950 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-05-30 00:17:35.139903 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:17:35.139991 | orchestrator | 2025-05-30 00:17:35.140004 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-05-30 00:17:35.219489 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager 2025-05-30 00:17:35.219584 | orchestrator | 2025-05-30 00:17:35.219599 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ****** 2025-05-30 00:17:36.019822 | orchestrator | ok: [testbed-manager] 2025-05-30 00:17:36.019920 | orchestrator | 2025-05-30 00:17:36.019945 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] *** 2025-05-30 00:17:36.090602 | orchestrator | ok: [testbed-manager] 2025-05-30 00:17:36.090685 | orchestrator | 2025-05-30 00:17:36.090700 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] *** 2025-05-30 00:17:36.148251 | orchestrator | ok: [testbed-manager] => { 2025-05-30 00:17:36.148330 | orchestrator | "msg": "The major version of the running postgres container is 16" 2025-05-30 00:17:36.148344 | orchestrator | } 2025-05-30 00:17:36.148356 | orchestrator | 2025-05-30 00:17:36.148367 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ****************** 2025-05-30 00:17:36.775506 | orchestrator | ok: [testbed-manager] 2025-05-30 00:17:36.775601 | orchestrator | 2025-05-30 00:17:36.775617 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] ********** 2025-05-30 00:17:37.628494 | orchestrator | ok: [testbed-manager] 2025-05-30 00:17:37.628586 | orchestrator | 2025-05-30 00:17:37.628601 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ****** 2025-05-30 00:17:37.694091 | orchestrator | ok: 
[testbed-manager] 2025-05-30 00:17:37.694191 | orchestrator | 2025-05-30 00:17:37.694207 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] *** 2025-05-30 00:17:37.742216 | orchestrator | ok: [testbed-manager] => { 2025-05-30 00:17:37.742284 | orchestrator | "msg": "The major version of the postgres image is 16" 2025-05-30 00:17:37.742295 | orchestrator | } 2025-05-30 00:17:37.742303 | orchestrator | 2025-05-30 00:17:37.742311 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ****************** 2025-05-30 00:17:37.806810 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:17:37.806872 | orchestrator | 2025-05-30 00:17:37.806905 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ****** 2025-05-30 00:17:37.861742 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:17:37.861813 | orchestrator | 2025-05-30 00:17:37.861826 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] ********* 2025-05-30 00:17:37.912968 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:17:37.913019 | orchestrator | 2025-05-30 00:17:37.913032 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************ 2025-05-30 00:17:37.967728 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:17:37.967793 | orchestrator | 2025-05-30 00:17:37.967807 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] *** 2025-05-30 00:17:38.023032 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:17:38.023102 | orchestrator | 2025-05-30 00:17:38.023162 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] ***************** 2025-05-30 00:17:38.070180 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:17:38.070255 | orchestrator | 2025-05-30 00:17:38.070272 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-05-30 00:17:39.480381 | orchestrator | changed: [testbed-manager] 2025-05-30 00:17:39.480470 | orchestrator | 2025-05-30 00:17:39.480486 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] *** 2025-05-30 00:17:39.561984 | orchestrator | ok: [testbed-manager] 2025-05-30 00:17:39.562169 | orchestrator | 2025-05-30 00:17:39.562198 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] ***** 2025-05-30 00:18:39.622672 | orchestrator | Pausing for 60 seconds 2025-05-30 00:18:39.622783 | orchestrator | changed: [testbed-manager] 2025-05-30 00:18:39.622796 | orchestrator | 2025-05-30 00:18:39.622807 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] *** 2025-05-30 00:18:39.680406 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager 2025-05-30 00:18:39.680488 | orchestrator | 2025-05-30 00:18:39.680500 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] *** 2025-05-30 00:22:51.326669 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left). 2025-05-30 00:22:51.326783 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left). 
2025-05-30 00:22:51.326797 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left). 2025-05-30 00:22:51.326807 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left). 2025-05-30 00:22:51.326818 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left). 2025-05-30 00:22:51.326827 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left). 2025-05-30 00:22:51.326837 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left). 2025-05-30 00:22:51.326847 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left). 2025-05-30 00:22:51.326879 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left). 2025-05-30 00:22:51.326890 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left). 2025-05-30 00:22:51.326899 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left). 2025-05-30 00:22:51.326909 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left). 2025-05-30 00:22:51.326918 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left). 2025-05-30 00:22:51.326928 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left). 2025-05-30 00:22:51.326938 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left). 2025-05-30 00:22:51.326948 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left). 2025-05-30 00:22:51.326957 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left). 2025-05-30 00:22:51.326983 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left). 2025-05-30 00:22:51.326994 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left). 2025-05-30 00:22:51.327004 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left). 2025-05-30 00:22:51.327013 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (40 retries left). 2025-05-30 00:22:51.327023 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (39 retries left). 2025-05-30 00:22:51.327032 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (38 retries left). 2025-05-30 00:22:51.327042 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (37 retries left). 
2025-05-30 00:22:51.327052 | orchestrator | changed: [testbed-manager] 2025-05-30 00:22:51.327063 | orchestrator | 2025-05-30 00:22:51.327074 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-05-30 00:22:51.327084 | orchestrator | 2025-05-30 00:22:51.327094 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-30 00:22:53.402863 | orchestrator | ok: [testbed-manager] 2025-05-30 00:22:53.402988 | orchestrator | 2025-05-30 00:22:53.403005 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-05-30 00:22:53.505853 | orchestrator | included: osism.services.manager for testbed-manager 2025-05-30 00:22:53.505952 | orchestrator | 2025-05-30 00:22:53.505965 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-05-30 00:22:53.566102 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-05-30 00:22:53.566263 | orchestrator | 2025-05-30 00:22:53.566281 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-05-30 00:22:55.434128 | orchestrator | ok: [testbed-manager] 2025-05-30 00:22:55.434276 | orchestrator | 2025-05-30 00:22:55.434294 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-05-30 00:22:55.496314 | orchestrator | ok: [testbed-manager] 2025-05-30 00:22:55.496404 | orchestrator | 2025-05-30 00:22:55.496418 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-05-30 00:22:55.602096 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-05-30 00:22:55.602245 | orchestrator | 2025-05-30 00:22:55.602266 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-05-30 00:22:58.493688 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-05-30 00:22:58.493833 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-05-30 00:22:58.493850 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-05-30 00:22:58.494696 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-05-30 00:22:58.494719 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-05-30 00:22:58.494733 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-05-30 00:22:58.494746 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-05-30 00:22:58.494757 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-05-30 00:22:58.494768 | orchestrator | 2025-05-30 00:22:58.494780 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2025-05-30 00:22:59.140310 | orchestrator | changed: [testbed-manager] 2025-05-30 00:22:59.140398 | orchestrator | 2025-05-30 00:22:59.140414 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-05-30 00:22:59.787835 | orchestrator | changed: [testbed-manager] 2025-05-30 00:22:59.787923 | orchestrator | 2025-05-30 00:22:59.787938 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-05-30 00:22:59.866284 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-05-30 00:22:59.866360 | orchestrator | 2025-05-30 00:22:59.866374 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-05-30 00:23:01.082793 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-05-30 00:23:01.082882 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-05-30 00:23:01.082897 | orchestrator | 2025-05-30 00:23:01.082910 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-05-30 00:23:01.724124 | orchestrator | changed: [testbed-manager] 2025-05-30 00:23:01.724298 | orchestrator | 2025-05-30 00:23:01.724319 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-05-30 00:23:01.779524 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:23:01.779636 | orchestrator | 2025-05-30 00:23:01.779657 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-05-30 00:23:01.860718 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-05-30 00:23:01.860817 | orchestrator | 2025-05-30 00:23:01.860832 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-05-30 00:23:03.242476 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-30 00:23:03.242582 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-30 00:23:03.242602 | orchestrator | changed: [testbed-manager] 2025-05-30 00:23:03.242616 | orchestrator | 2025-05-30 00:23:03.242629 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-05-30 00:23:03.852683 | orchestrator | changed: [testbed-manager] 2025-05-30 00:23:03.852801 | orchestrator | 2025-05-30 00:23:03.852817 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-05-30 00:23:03.933270 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager 2025-05-30 00:23:03.933376 | orchestrator | 2025-05-30 00:23:03.933392 | orchestrator | TASK [osism.services.manager : Copy secret files] ****************************** 2025-05-30 00:23:05.129973 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-30 00:23:05.130141 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-30 00:23:05.130158 | orchestrator | changed: [testbed-manager] 2025-05-30 00:23:05.130220 | orchestrator | 2025-05-30 00:23:05.130234 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] ******************* 2025-05-30 00:23:05.837045 | orchestrator | changed: [testbed-manager] 2025-05-30 00:23:05.837208 | orchestrator | 2025-05-30 00:23:05.837225 | orchestrator | TASK [osism.services.manager : Copy inventory-reconciler environment file] ***** 2025-05-30 00:23:06.471854 | orchestrator | changed: [testbed-manager] 2025-05-30 00:23:06.471964 | orchestrator | 2025-05-30 00:23:06.471981 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-05-30 00:23:06.597563 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-05-30 
00:23:06.597694 | orchestrator | 2025-05-30 00:23:06.597723 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-05-30 00:23:07.177119 | orchestrator | changed: [testbed-manager] 2025-05-30 00:23:07.177287 | orchestrator | 2025-05-30 00:23:07.177307 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-05-30 00:23:08.587574 | orchestrator | changed: [testbed-manager] 2025-05-30 00:23:08.587684 | orchestrator | 2025-05-30 00:23:08.587699 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-05-30 00:23:09.903754 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-05-30 00:23:09.903863 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-05-30 00:23:09.903880 | orchestrator | 2025-05-30 00:23:09.903894 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-05-30 00:23:10.638116 | orchestrator | changed: [testbed-manager] 2025-05-30 00:23:10.638254 | orchestrator | 2025-05-30 00:23:10.638273 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-05-30 00:23:11.018635 | orchestrator | ok: [testbed-manager] 2025-05-30 00:23:11.018740 | orchestrator | 2025-05-30 00:23:11.018758 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-05-30 00:23:11.378396 | orchestrator | changed: [testbed-manager] 2025-05-30 00:23:11.378507 | orchestrator | 2025-05-30 00:23:11.378522 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-05-30 00:23:11.430936 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:23:11.431034 | orchestrator | 2025-05-30 00:23:11.431049 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-05-30 00:23:11.503856 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-05-30 00:23:11.503958 | orchestrator | 2025-05-30 00:23:11.503974 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-05-30 00:23:11.551436 | orchestrator | ok: [testbed-manager] 2025-05-30 00:23:11.551529 | orchestrator | 2025-05-30 00:23:11.551543 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-05-30 00:23:13.599854 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-05-30 00:23:13.599962 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-05-30 00:23:13.599978 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-05-30 00:23:13.599991 | orchestrator | 2025-05-30 00:23:13.600004 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-05-30 00:23:14.277446 | orchestrator | changed: [testbed-manager] 2025-05-30 00:23:14.277558 | orchestrator | 2025-05-30 00:23:14.277575 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-05-30 00:23:15.059728 | orchestrator | changed: [testbed-manager] 2025-05-30 00:23:15.059841 | orchestrator | 2025-05-30 00:23:15.059859 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-05-30 00:23:15.820393 | orchestrator | changed: [testbed-manager] 2025-05-30 
00:23:15.820500 | orchestrator | 2025-05-30 00:23:15.820516 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-05-30 00:23:15.894621 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-05-30 00:23:15.894711 | orchestrator | 2025-05-30 00:23:15.894725 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-05-30 00:23:15.937156 | orchestrator | ok: [testbed-manager] 2025-05-30 00:23:15.937311 | orchestrator | 2025-05-30 00:23:15.937325 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-05-30 00:23:16.665365 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-05-30 00:23:16.665474 | orchestrator | 2025-05-30 00:23:16.665490 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-05-30 00:23:16.752947 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-05-30 00:23:16.753048 | orchestrator | 2025-05-30 00:23:16.753064 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-05-30 00:23:17.467420 | orchestrator | changed: [testbed-manager] 2025-05-30 00:23:17.467529 | orchestrator | 2025-05-30 00:23:17.467577 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-05-30 00:23:18.070423 | orchestrator | ok: [testbed-manager] 2025-05-30 00:23:18.070525 | orchestrator | 2025-05-30 00:23:18.070541 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-05-30 00:23:18.131694 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:23:18.131786 | orchestrator | 2025-05-30 00:23:18.131800 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-05-30 00:23:18.188591 | orchestrator | ok: [testbed-manager] 2025-05-30 00:23:18.188690 | orchestrator | 2025-05-30 00:23:18.188705 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-05-30 00:23:19.005930 | orchestrator | changed: [testbed-manager] 2025-05-30 00:23:19.006102 | orchestrator | 2025-05-30 00:23:19.006139 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-05-30 00:23:59.871467 | orchestrator | changed: [testbed-manager] 2025-05-30 00:23:59.871593 | orchestrator | 2025-05-30 00:23:59.871613 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-05-30 00:24:00.528941 | orchestrator | ok: [testbed-manager] 2025-05-30 00:24:00.529049 | orchestrator | 2025-05-30 00:24:00.529065 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-05-30 00:24:03.311795 | orchestrator | changed: [testbed-manager] 2025-05-30 00:24:03.311902 | orchestrator | 2025-05-30 00:24:03.311918 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-05-30 00:24:03.374314 | orchestrator | ok: [testbed-manager] 2025-05-30 00:24:03.374417 | orchestrator | 2025-05-30 00:24:03.374444 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-05-30 00:24:03.374461 | orchestrator | 2025-05-30 
00:24:03.374478 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-05-30 00:24:03.432810 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:24:03.432912 | orchestrator | 2025-05-30 00:24:03.432926 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-05-30 00:25:03.488791 | orchestrator | Pausing for 60 seconds 2025-05-30 00:25:03.488942 | orchestrator | changed: [testbed-manager] 2025-05-30 00:25:03.488960 | orchestrator | 2025-05-30 00:25:03.488974 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-05-30 00:25:08.936770 | orchestrator | changed: [testbed-manager] 2025-05-30 00:25:08.936890 | orchestrator | 2025-05-30 00:25:08.936906 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-05-30 00:25:50.521066 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-05-30 00:25:50.521219 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-05-30 00:25:50.521238 | orchestrator | changed: [testbed-manager] 2025-05-30 00:25:50.521251 | orchestrator | 2025-05-30 00:25:50.521262 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-05-30 00:25:56.211316 | orchestrator | changed: [testbed-manager] 2025-05-30 00:25:56.211437 | orchestrator | 2025-05-30 00:25:56.211454 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-05-30 00:25:56.302851 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-05-30 00:25:56.302948 | orchestrator | 2025-05-30 00:25:56.302963 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-05-30 00:25:56.302976 | orchestrator | 2025-05-30 00:25:56.302988 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-05-30 00:25:56.359793 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:25:56.359889 | orchestrator | 2025-05-30 00:25:56.359903 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:25:56.359916 | orchestrator | testbed-manager : ok=111 changed=59 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 2025-05-30 00:25:56.359928 | orchestrator | 2025-05-30 00:25:56.472734 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-30 00:25:56.472818 | orchestrator | + deactivate 2025-05-30 00:25:56.473125 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-05-30 00:25:56.473371 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-30 00:25:56.473390 | orchestrator | + export PATH 2025-05-30 00:25:56.473403 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-05-30 00:25:56.473414 | orchestrator | + '[' -n '' ']' 2025-05-30 00:25:56.473425 | orchestrator | + hash -r 2025-05-30 00:25:56.473436 | orchestrator | + '[' -n '' ']' 2025-05-30 00:25:56.473446 | orchestrator | + unset VIRTUAL_ENV 2025-05-30 00:25:56.473456 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-05-30 00:25:56.473467 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-05-30 00:25:56.473478 | orchestrator | + unset -f deactivate 2025-05-30 00:25:56.473490 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-05-30 00:25:56.479836 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-05-30 00:25:56.479895 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-05-30 00:25:56.479910 | orchestrator | + local max_attempts=60 2025-05-30 00:25:56.479922 | orchestrator | + local name=ceph-ansible 2025-05-30 00:25:56.479934 | orchestrator | + local attempt_num=1 2025-05-30 00:25:56.480479 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-05-30 00:25:56.513635 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-30 00:25:56.513712 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-05-30 00:25:56.513726 | orchestrator | + local max_attempts=60 2025-05-30 00:25:56.513738 | orchestrator | + local name=kolla-ansible 2025-05-30 00:25:56.513750 | orchestrator | + local attempt_num=1 2025-05-30 00:25:56.514096 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-05-30 00:25:56.547698 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-30 00:25:56.547771 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-05-30 00:25:56.547784 | orchestrator | + local max_attempts=60 2025-05-30 00:25:56.547796 | orchestrator | + local name=osism-ansible 2025-05-30 00:25:56.547807 | orchestrator | + local attempt_num=1 2025-05-30 00:25:56.548622 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-05-30 00:25:56.577911 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-30 00:25:56.577976 | orchestrator | + [[ true == \t\r\u\e ]] 2025-05-30 00:25:56.577990 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-05-30 00:25:57.266088 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-05-30 00:25:57.466249 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-05-30 00:25:57.466349 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:8.1.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-05-30 00:25:57.466365 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:8.1.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-05-30 00:25:57.466377 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-05-30 00:25:57.466411 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-05-30 00:25:57.466426 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" beat About a minute ago Up About a minute (healthy) 2025-05-30 00:25:57.466445 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" conductor About a minute ago Up About a minute (healthy) 2025-05-30 00:25:57.466462 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" flower About a minute ago Up About a minute (healthy) 2025-05-30 00:25:57.466479 | orchestrator | manager-inventory_reconciler-1 
registry.osism.tech/osism/inventory-reconciler:8.1.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 48 seconds (healthy) 2025-05-30 00:25:57.466525 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" listener About a minute ago Up About a minute (healthy) 2025-05-30 00:25:57.466545 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.6.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-05-30 00:25:57.466557 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" netbox About a minute ago Up About a minute (healthy) 2025-05-30 00:25:57.466568 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" openstack About a minute ago Up About a minute (healthy) 2025-05-30 00:25:57.466579 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-05-30 00:25:57.466590 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" watchdog About a minute ago Up About a minute (healthy) 2025-05-30 00:25:57.466601 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:8.1.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-05-30 00:25:57.466611 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:8.1.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-05-30 00:25:57.466622 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- sl…" osismclient About a minute ago Up About a minute (healthy) 2025-05-30 00:25:57.473635 | orchestrator | + docker compose --project-directory /opt/netbox ps 2025-05-30 00:25:57.589901 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-05-30 00:25:57.589998 | orchestrator | netbox-netbox-1 registry.osism.tech/osism/netbox:v4.1.7 "/usr/bin/tini -- /o…" netbox 8 minutes ago Up 7 minutes (healthy) 2025-05-30 00:25:57.590012 | orchestrator | netbox-netbox-worker-1 registry.osism.tech/osism/netbox:v4.1.7 "/opt/netbox/venv/bi…" netbox-worker 8 minutes ago Up 3 minutes (healthy) 2025-05-30 00:25:57.590106 | orchestrator | netbox-postgres-1 registry.osism.tech/dockerhub/library/postgres:16.6-alpine "docker-entrypoint.s…" postgres 8 minutes ago Up 8 minutes (healthy) 5432/tcp 2025-05-30 00:25:57.590119 | orchestrator | netbox-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis 8 minutes ago Up 8 minutes (healthy) 6379/tcp 2025-05-30 00:25:57.597632 | orchestrator | ++ semver 8.1.0 7.0.0 2025-05-30 00:25:57.648424 | orchestrator | + [[ 1 -ge 0 ]] 2025-05-30 00:25:57.648519 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-05-30 00:25:57.652011 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-05-30 00:25:59.220542 | orchestrator | 2025-05-30 00:25:59 | INFO  | Task e03b424b-2819-4fd6-b9db-666580361fd5 (resolvconf) was prepared for execution. 2025-05-30 00:25:59.220647 | orchestrator | 2025-05-30 00:25:59 | INFO  | It takes a moment until task e03b424b-2819-4fd6-b9db-666580361fd5 (resolvconf) has been started and output is visible here. 
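The shell trace above verifies the freshly started stack with a wait_for_container_healthy helper that polls docker inspect -f '{{.State.Health.Status}}' for each of the ceph-ansible, kolla-ansible and osism-ansible containers before moving on. A minimal reconstruction of such a helper, using the same docker inspect call; the retry delay and the error handling are assumptions, not the script's actual values:

#!/usr/bin/env bash
# Poll a container's health status until it reports "healthy" or the attempt
# budget is exhausted. Mirrors the helper traced above; the sleep interval
# and failure message are assumed.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1

    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container ${name} did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}

# Same calls as in the trace above.
wait_for_container_healthy 60 ceph-ansible
wait_for_container_healthy 60 kolla-ansible
wait_for_container_healthy 60 osism-ansible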
2025-05-30 00:26:02.201113 | orchestrator | 2025-05-30 00:26:02.201282 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-05-30 00:26:02.202451 | orchestrator | 2025-05-30 00:26:02.202494 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-30 00:26:02.202510 | orchestrator | Friday 30 May 2025 00:26:02 +0000 (0:00:00.084) 0:00:00.084 ************ 2025-05-30 00:26:06.278664 | orchestrator | ok: [testbed-manager] 2025-05-30 00:26:06.280777 | orchestrator | 2025-05-30 00:26:06.282273 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-05-30 00:26:06.283586 | orchestrator | Friday 30 May 2025 00:26:06 +0000 (0:00:04.079) 0:00:04.163 ************ 2025-05-30 00:26:06.327578 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:26:06.327673 | orchestrator | 2025-05-30 00:26:06.327709 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-05-30 00:26:06.327843 | orchestrator | Friday 30 May 2025 00:26:06 +0000 (0:00:00.050) 0:00:04.214 ************ 2025-05-30 00:26:06.422463 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-05-30 00:26:06.422665 | orchestrator | 2025-05-30 00:26:06.422698 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-05-30 00:26:06.423112 | orchestrator | Friday 30 May 2025 00:26:06 +0000 (0:00:00.095) 0:00:04.309 ************ 2025-05-30 00:26:06.508200 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-05-30 00:26:06.508526 | orchestrator | 2025-05-30 00:26:06.508565 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-05-30 00:26:06.508578 | orchestrator | Friday 30 May 2025 00:26:06 +0000 (0:00:00.085) 0:00:04.395 ************ 2025-05-30 00:26:07.570579 | orchestrator | ok: [testbed-manager] 2025-05-30 00:26:07.570968 | orchestrator | 2025-05-30 00:26:07.571557 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-05-30 00:26:07.572213 | orchestrator | Friday 30 May 2025 00:26:07 +0000 (0:00:01.061) 0:00:05.456 ************ 2025-05-30 00:26:07.641144 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:26:07.641275 | orchestrator | 2025-05-30 00:26:07.641769 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-05-30 00:26:07.642304 | orchestrator | Friday 30 May 2025 00:26:07 +0000 (0:00:00.069) 0:00:05.525 ************ 2025-05-30 00:26:08.142623 | orchestrator | ok: [testbed-manager] 2025-05-30 00:26:08.143720 | orchestrator | 2025-05-30 00:26:08.143812 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-05-30 00:26:08.144544 | orchestrator | Friday 30 May 2025 00:26:08 +0000 (0:00:00.503) 0:00:06.029 ************ 2025-05-30 00:26:08.220400 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:26:08.220502 | orchestrator | 2025-05-30 00:26:08.221950 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-05-30 00:26:08.223309 | orchestrator | Friday 30 May 2025 00:26:08 +0000 (0:00:00.075) 0:00:06.105 
************ 2025-05-30 00:26:08.750593 | orchestrator | changed: [testbed-manager] 2025-05-30 00:26:08.750693 | orchestrator | 2025-05-30 00:26:08.751493 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-05-30 00:26:08.752016 | orchestrator | Friday 30 May 2025 00:26:08 +0000 (0:00:00.530) 0:00:06.635 ************ 2025-05-30 00:26:09.832103 | orchestrator | changed: [testbed-manager] 2025-05-30 00:26:09.833630 | orchestrator | 2025-05-30 00:26:09.833859 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-05-30 00:26:09.834993 | orchestrator | Friday 30 May 2025 00:26:09 +0000 (0:00:01.081) 0:00:07.717 ************ 2025-05-30 00:26:10.793453 | orchestrator | ok: [testbed-manager] 2025-05-30 00:26:10.793850 | orchestrator | 2025-05-30 00:26:10.794455 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-05-30 00:26:10.795042 | orchestrator | Friday 30 May 2025 00:26:10 +0000 (0:00:00.961) 0:00:08.679 ************ 2025-05-30 00:26:10.872594 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-05-30 00:26:10.872836 | orchestrator | 2025-05-30 00:26:10.872855 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-05-30 00:26:10.872893 | orchestrator | Friday 30 May 2025 00:26:10 +0000 (0:00:00.078) 0:00:08.757 ************ 2025-05-30 00:26:12.033767 | orchestrator | changed: [testbed-manager] 2025-05-30 00:26:12.034622 | orchestrator | 2025-05-30 00:26:12.035533 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:26:12.035823 | orchestrator | 2025-05-30 00:26:12 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-30 00:26:12.035998 | orchestrator | 2025-05-30 00:26:12 | INFO  | Please wait and do not abort execution. 
2025-05-30 00:26:12.036981 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-30 00:26:12.037427 | orchestrator | 2025-05-30 00:26:12.037850 | orchestrator | Friday 30 May 2025 00:26:12 +0000 (0:00:01.158) 0:00:09.916 ************ 2025-05-30 00:26:12.038268 | orchestrator | =============================================================================== 2025-05-30 00:26:12.038795 | orchestrator | Gathering Facts --------------------------------------------------------- 4.08s 2025-05-30 00:26:12.039256 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.16s 2025-05-30 00:26:12.040066 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.08s 2025-05-30 00:26:12.042011 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.06s 2025-05-30 00:26:12.042497 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.96s 2025-05-30 00:26:12.043310 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.53s 2025-05-30 00:26:12.043626 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.50s 2025-05-30 00:26:12.044475 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.10s 2025-05-30 00:26:12.044741 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2025-05-30 00:26:12.045391 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2025-05-30 00:26:12.045688 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-05-30 00:26:12.046273 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2025-05-30 00:26:12.046721 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.05s 2025-05-30 00:26:12.395186 | orchestrator | + osism apply sshconfig 2025-05-30 00:26:13.765362 | orchestrator | 2025-05-30 00:26:13 | INFO  | Task e57c39e9-0d75-43b5-a752-73ab8f4a39a5 (sshconfig) was prepared for execution. 2025-05-30 00:26:13.765453 | orchestrator | 2025-05-30 00:26:13 | INFO  | It takes a moment until task e57c39e9-0d75-43b5-a752-73ab8f4a39a5 (sshconfig) has been started and output is visible here. 
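The osism apply sshconfig run below, and the osism apply known-hosts run that follows it, generate a per-host ssh config fragment set for the operator user and then collect host keys with ssh-keyscan, writing known_hosts entries for both the hostnames and the ansible_host addresses. A rough shell equivalent of the key-scanning half, with the hostnames taken from this log; the output path is an assumption, since the play manages its own known_hosts file:

#!/usr/bin/env bash
# Collect SSH host keys for all testbed hosts, roughly what the known-hosts
# play automates. The target file below is an assumption.
known_hosts=~/.ssh/known_hosts

for host in testbed-manager testbed-node-{0..5}; do
    # Scan RSA, ECDSA and Ed25519 keys, matching the key types seen in the log.
    ssh-keyscan -t rsa,ecdsa,ed25519 "$host" >> "$known_hosts" 2>/dev/null
done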
2025-05-30 00:26:16.754945 | orchestrator | 2025-05-30 00:26:16.757504 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-05-30 00:26:16.757557 | orchestrator | 2025-05-30 00:26:16.757571 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-05-30 00:26:16.758534 | orchestrator | Friday 30 May 2025 00:26:16 +0000 (0:00:00.102) 0:00:00.102 ************ 2025-05-30 00:26:17.317930 | orchestrator | ok: [testbed-manager] 2025-05-30 00:26:17.318396 | orchestrator | 2025-05-30 00:26:17.320015 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-05-30 00:26:17.320943 | orchestrator | Friday 30 May 2025 00:26:17 +0000 (0:00:00.563) 0:00:00.666 ************ 2025-05-30 00:26:17.812865 | orchestrator | changed: [testbed-manager] 2025-05-30 00:26:17.812972 | orchestrator | 2025-05-30 00:26:17.813314 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-05-30 00:26:17.813878 | orchestrator | Friday 30 May 2025 00:26:17 +0000 (0:00:00.495) 0:00:01.162 ************ 2025-05-30 00:26:23.362840 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-05-30 00:26:23.362956 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-05-30 00:26:23.362972 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-05-30 00:26:23.363774 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-05-30 00:26:23.363803 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-05-30 00:26:23.363815 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-05-30 00:26:23.364593 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-05-30 00:26:23.367473 | orchestrator | 2025-05-30 00:26:23.368307 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-05-30 00:26:23.368904 | orchestrator | Friday 30 May 2025 00:26:23 +0000 (0:00:05.548) 0:00:06.710 ************ 2025-05-30 00:26:23.421978 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:26:23.422123 | orchestrator | 2025-05-30 00:26:23.422138 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-05-30 00:26:23.422444 | orchestrator | Friday 30 May 2025 00:26:23 +0000 (0:00:00.058) 0:00:06.769 ************ 2025-05-30 00:26:24.003996 | orchestrator | changed: [testbed-manager] 2025-05-30 00:26:24.004102 | orchestrator | 2025-05-30 00:26:24.004117 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:26:24.004320 | orchestrator | 2025-05-30 00:26:24 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-30 00:26:24.004345 | orchestrator | 2025-05-30 00:26:24 | INFO  | Please wait and do not abort execution. 
2025-05-30 00:26:24.005705 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-30 00:26:24.006312 | orchestrator | 2025-05-30 00:26:24.006928 | orchestrator | Friday 30 May 2025 00:26:23 +0000 (0:00:00.583) 0:00:07.353 ************ 2025-05-30 00:26:24.007343 | orchestrator | =============================================================================== 2025-05-30 00:26:24.007623 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.55s 2025-05-30 00:26:24.008395 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.58s 2025-05-30 00:26:24.008499 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.56s 2025-05-30 00:26:24.008874 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.50s 2025-05-30 00:26:24.009266 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s 2025-05-30 00:26:24.403526 | orchestrator | + osism apply known-hosts 2025-05-30 00:26:25.817248 | orchestrator | 2025-05-30 00:26:25 | INFO  | Task b5c9080f-28a2-495a-b7f7-4bf4977935e1 (known-hosts) was prepared for execution. 2025-05-30 00:26:25.817350 | orchestrator | 2025-05-30 00:26:25 | INFO  | It takes a moment until task b5c9080f-28a2-495a-b7f7-4bf4977935e1 (known-hosts) has been started and output is visible here. 2025-05-30 00:26:28.737393 | orchestrator | 2025-05-30 00:26:28.739019 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-05-30 00:26:28.739477 | orchestrator | 2025-05-30 00:26:28.739830 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-05-30 00:26:28.740962 | orchestrator | Friday 30 May 2025 00:26:28 +0000 (0:00:00.107) 0:00:00.107 ************ 2025-05-30 00:26:34.786507 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-30 00:26:34.786740 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-30 00:26:34.787943 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-05-30 00:26:34.788280 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-30 00:26:34.789293 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-05-30 00:26:34.790114 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-30 00:26:34.791558 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-05-30 00:26:34.792031 | orchestrator | 2025-05-30 00:26:34.792685 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-05-30 00:26:34.793461 | orchestrator | Friday 30 May 2025 00:26:34 +0000 (0:00:06.051) 0:00:06.158 ************ 2025-05-30 00:26:34.953438 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-30 00:26:34.956067 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-30 00:26:34.956109 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-30 
00:26:34.956324 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-30 00:26:34.957309 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-30 00:26:34.957971 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-30 00:26:34.958358 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-30 00:26:34.958938 | orchestrator | 2025-05-30 00:26:34.959448 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-30 00:26:34.960249 | orchestrator | Friday 30 May 2025 00:26:34 +0000 (0:00:00.167) 0:00:06.325 ************ 2025-05-30 00:26:36.093520 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKRQbd/X7CmHfa9onBUkvlDXAnmxS6OnmAAg5faPJqGC) 2025-05-30 00:26:36.093955 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvSY6eLotWOEk8fRQH/UXmRcM2JoE25lt6FPBOkChpitsFYkfE+F230qPfjM1jWIxHBVIheqw3S74BbfEBefqJivYAdEvwy8OYb0Nv115uoO09SGdxXr4xRsO4CvilEXXr2s2po9EgkgBKTZMRxncIcRtyGeo/8vXCuDVirXigiAzfNJ7DrAIdHsebVpHJOAu0JB0uMmpfWmLZmFWsACFxR/XtI45wED43J5PdKjd3RrCHHgFGUU+dEQW+WUIW2fKZCUJHaDp1GoxLoVVBO7W3KyZdBiD4dY7rRwj7Yj9j6xlRty4Hv+a6VbUpX2AV7BV7/soxi5cotQac8l57X8oAri93+eIcM24jLmr9FTL1pYVHIyh8L2Tcwes0zgttkjxmwCNNL0v8cjuiuaSAr4UeYz8N85IJxALFMEkVhp7eWsCVW4Ut/9HqWOYD23dNzgByAW21L5JjT1JBsS6qX/PoCgwPFGqU9xhLTGTruJ2KhP9nUhUdwB5Na4bauDbjKws=) 2025-05-30 00:26:36.093996 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN7bwTs2XPqkD1d1G/M8g6DK1u3MRTWaPWo1Qqw4gTXRt/sp9CUwV1pIAOcmW1C7NPWGX5o9RtwZ70rHYW62AI8=) 2025-05-30 00:26:36.094906 | orchestrator | 2025-05-30 00:26:36.095610 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-30 00:26:36.096384 | orchestrator | Friday 30 May 2025 00:26:36 +0000 (0:00:01.139) 0:00:07.465 ************ 2025-05-30 00:26:37.156901 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCNLNjVOdMDS69pg+KubFomjTPwFxUGTVC4kskMifkIZyGo2sOfJxHY6bz/sn9s9i+rZgBIUshOJSz63e1YImMiBNXydEzHFLRtNHIgylvyaKf+rWsV/qGTx5N9W5AfAXh2wgt5TyEdMn02bCEldIDBJerYl8kYm4GTnhrdBocWmVgkJv45KwuzkpTU3JnrkXLQns0IPiEt4HQeU3XfNFOCIoaOO4U6ajIdryFwLCBW4kMkOyWiH97kBv0/VaO/CHTEMzDbWFSGTxmXHN/3AjqKTRlXxqGRTzUUO/g+9l6YvmvgO2jY750imsPVX6h+FEtEKt+V979GuM69pYTtPmii7i4403l+Qy0deoIbxr9y/bEvWj+a7/iA4yqPvJ4BAIHt0aWXYx1nXpSjfhAuWhchbRLh0nUeBVN8ujnI5c0WTwOktsgX+QHFuQncQ1MA7o4sby7PNEz73SdWze3QyBFTAQqmzBWE28OHH3NkjICf5nse+1APvEkujl5WdUC2KWE=) 2025-05-30 00:26:37.157433 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMv5ifxt/G7X0eIvwdaKvGb7uHs802D3mxN44mjZcSgU) 2025-05-30 00:26:37.158401 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFYMoaW6ibFf5WmsNmRoni8gn7T7HXHRa5d9chQIJGkQUXInpbPk08cC/I6e1R2jHF8U5fVN4UE72MXFF8p2yoA=) 2025-05-30 00:26:37.158518 | orchestrator | 2025-05-30 00:26:37.158953 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-30 00:26:37.159466 | orchestrator | Friday 30 May 2025 00:26:37 +0000 (0:00:01.062) 0:00:08.527 ************ 2025-05-30 00:26:38.187299 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMLLPWm0lyDUtIbg+AhGg8ax5X/2j3FpoDm0ay2hmOmXliq4XAU+Z1Eh89I2cfI4YzxsI+gFcpuaQPzvdverrAhgqNPOMKdtZrbxyTmop1cRNGqp123Usb09dexnQgCuw5G5FgA9BddoVBDQY29XzNyP2btbeT8tvHWDWiPRIR1j3EVpWRJqU1VVP8iGPyU26vrQtD/PG48JM7EOcn4hbmEG6jCl1/Y4quzZBhAz/EXXW0B1D1FRTf+uC3ziYOT7aql0hVeOUcsL8Pj4SSrgW3BpS+vKGIJg8ExblznaQUTy+GbE5PUe4CDWmugKS4ygFvTFWukpkS6g8uVcvrcynUWAvXxoHuwWn7NVf4N2IGXv3vve6HAaEoR0+V8zce+bEXVCgCumOSQcwmz+u/qbl3sRu7jNvVnkiqBd9mpePsYjYCYmIt5qJAAfwOZhc3Qklb+te7lV0pDaJw6NkPY8EVqZf+90KwqCyVDMW86ljVbFguzU+D0gcYHzqK+/Y/HxM=) 2025-05-30 00:26:38.189395 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNVc3zXzRGYJoTWlmnvfjItyOUcV2bd8bwfPTN9yRFHwezzK56RCO5HCZvZv8tzH07MZmd6in9DvN3AYejjj0mg=) 2025-05-30 00:26:38.191709 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIARIvCV+3y4Fwgz+lLfSd+ev31CHfVqExxL2FuTHItD5) 2025-05-30 00:26:38.192012 | orchestrator | 2025-05-30 00:26:38.192459 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-30 00:26:38.192981 | orchestrator | Friday 30 May 2025 00:26:38 +0000 (0:00:01.028) 0:00:09.556 ************ 2025-05-30 00:26:39.241759 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCr3nFkGCk8+0rCHg7Z+DCiihGANNBYfqJPaL6rdu63gb1UbcmXHbCpIVMlhUiJlCngyQMN9KXPkmjm19xpIwcio0A/tAa78Kfu5MonmdeNQcDcxQncx99sJHp4hDM1tX0XlZFeejZ2iIzfptT46ovFHZCReIQFcancM/Pi9W7gKA7lqvJyWDXaxKzrIlUzoJJtgKL3vEMMyifpo2An4sSD2H4YP6tQjLCplyDiVtCcu0smCpq5Ona34hvJph1fd92RUKIXYrzYKEShhJ2DNtipXQPyQ/K0xkCgNsFJXCFjQ4x9TxhLJBPk62DkYP49jJSadQnqAwl/EWNeMWCYhjODYlHMuKr+KDbaSPjcnRpg6afJmIYx3Y4s59BboX5bS+isxVopqv4OMzYEuls7HuuxCDuJUT57bCY/GKi64aPk5qZBaaCTREZW8RlzOXClFrju2e2M7NJuI6oqR6OHkFG0TbSsIi3ReNpGoObpAq+GsLwkH3o4PRLkkF/g84sz8n8=) 2025-05-30 00:26:39.244351 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLLoG5FlXpY+KTi+YtCSLCEX12jxlQNZEh9DsJe9+yQVY7QTg714DqMQurMWrZRE5xaFosnRNFY6OqWSm6Px+og=) 2025-05-30 00:26:39.245052 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA8WYjMgasa/qSQcWISx7wIyCB1HvISdp+f61OHE9P1K) 2025-05-30 00:26:39.245582 | orchestrator | 2025-05-30 00:26:39.246425 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-30 00:26:39.247020 | orchestrator | Friday 30 May 2025 00:26:39 +0000 (0:00:01.056) 0:00:10.612 ************ 2025-05-30 00:26:40.300828 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQClCE4Mq+dXEwLolC1lGS8XiGnSH75/ZZTVzEk0dj+59LhelzUkVseo4SQpGf4bbzwPYCoJ+1MjxKaZI1D+k/J8nYiI0IuQiZIQa66g4SulA2OudA8n1AKRv31p13lcbVkNLVS5D+2fItba/j7dkZLU/UI3ALomB+Jxxr7F8FTvZgqzgwNgvllMrnJn6Vfb9Zhd7uTJPK+ZMWZaDRiZPdow+saTMME+CsG0HMuSMpWD/T5Z/Iy35qjCYt+yVXGftkJar14kfY3Z2EVD0sX6ffzr9jqPZjltRaI3EzXoaV4N+U7G+LspPpp5DSIr6fSMxcPQsqHNj2b1H7ImRJCRdHh9OYGovPDdIYhnL+FCcUzjKjaKgTXiVWN4MPUqjQHR3Hj8R3BtYkDUiqyV597uEAWViqLoZ3i0QwJS7ztX3Vzuse4pylWhU3gIX5WEowcsnS4YBUlUpaFX9jgZTn5Cyo0nNmZ/nwuQRFAMz6zf20+rFZB8xkXj4KZFClD8gepWC5k=) 2025-05-30 00:26:40.300940 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCKSUMPIrIPRt+y/CbfitMHwKVayTZ40ix31EL4gUThrQE30CQAczYagTuhYiKKhQB2BBkpci/5+jI8KP02fIY8=) 2025-05-30 00:26:40.300982 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMbd8/3CkMtVMk5o2s5q4jBUL+XTu/vTqCOIAWL4TXCg) 2025-05-30 00:26:40.301673 | orchestrator | 2025-05-30 00:26:40.302273 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-30 00:26:40.302910 | orchestrator | Friday 30 May 2025 00:26:40 +0000 (0:00:01.060) 0:00:11.672 ************ 2025-05-30 00:26:41.348838 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBKpxX8tSTCa8/G3uByvBpNWeJU4mk08dVZ2W7a+Hhj/) 2025-05-30 00:26:41.349191 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKxmtWFFPk/6tZNf9cpTcqkoDhJHH2CA886kuW0m+iqjAwSXeMEY/j/w4MIdrR/VzxsmRnmwIVkCCS1/GDR9QMM1fJCeZwlnimq9nDIHj7yfiv6Vzd52Xex6X/Q3/hXN5pD0YiX2N18wZsOHYTsD8DZ9GBR3olMTIa/RgTOsMJlZrN6Eo6Mhqgo7XBhqJg/8bmFTu1cSIaXsPZBChRZ2Lo1Uy72c2jTLP6FgzIzMebQ9nEUgmYGbBXskO6+/g5rLeIV6OqW/6YDzkoPZsWPEGYjDAnIIfJo26WY83mYGEHP1AmCBxofomadqXltY3zhv6FJ10YPj8bfGnTqjwuZ9yPSbpOs/v8NVrBP0WmptSczSXofV5udrznubE1FjostEvKKd+MFZHUO0J/SxDx9A1vGFGzXWLuzOFnRR9tZh4lnQxC/0oKa9q1uCnv3WPuzldPXaeqKTJ6cxPZHktzx9oaQVtj+Nv3rZzAYCukiVSJKwni7u2jR7wQRiezZeFy1wU=) 2025-05-30 00:26:41.350419 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEgS2e3daugSBCGrwX1CurTFoOfWC0XFTxGPVNSwtc1A1Na1nj2A/sjYwDWOQH+MFGAXAX0aHufBrCMteJg5DbQ=) 2025-05-30 00:26:41.351283 | orchestrator | 2025-05-30 00:26:41.351774 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-30 00:26:41.352490 | orchestrator | Friday 30 May 2025 00:26:41 +0000 (0:00:01.046) 0:00:12.719 ************ 2025-05-30 00:26:42.376409 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMx2LnrWuKZdq2IkQ8UBsRbLtV0kvnmqYibmAqOIydLl2zs2wor2st8WD7FPwdv9OKV6k9kMU4MKiLv6zj7k3mw=) 2025-05-30 00:26:42.377075 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILUxHLV3QmKil9zyaI4H29axBBM+MPM7+YzMPLACTTLX) 2025-05-30 00:26:42.377456 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDCcZVet5lO0ckVmUTY7vrH30TXtO+F16AhKWVo7apWpgfmje3Wbqw9occpcu0Lg9HDbcHgj7wf8fM6Vlt1ce02Jg56xpTBJBQYn77w2Ie+I1dGPnu/dcDmfM4AsAtsQvI8p+QmJ3nB3tjMMTTC6avGzOjFkxd292A0a/I6IHBFRA3NPbo0eMqW5ryFBUISsDV3wkqQnTAdiU9mxEtMkvya4nDvcoNK9X2ou/GXSs73UUuRYw1vNX6IkdtVlZIM+RwxGmjupzSt3F8ORTKyuuy9JCS2kGjMItCB5QpE9MBIllLcIc/oqYfvTOEsAs5JKirrw42MtFQcqyAHEw3w7QkgigXTPFxDxytd4sfSTtdp8LiHZ4FHM3mWe/ihkPEEJT1DznatYBGtbo0cRSelcdGvbj5/xpUsro0zkRR+ZbXaYgcATTneaWq5Byvh7YMUhXxoCWbYkNdU15+OF/UoiIc8/ynb+gDJQDeutU0wG17IA21qUeaGrHJnXdUgcoSN3l8=) 2025-05-30 00:26:42.378536 | orchestrator | 2025-05-30 00:26:42.379431 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-05-30 00:26:42.379740 | orchestrator | Friday 30 May 2025 00:26:42 +0000 (0:00:01.028) 0:00:13.747 ************ 2025-05-30 00:26:47.675435 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-30 00:26:47.675537 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-30 00:26:47.676104 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-05-30 00:26:47.677092 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-30 00:26:47.678625 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-05-30 00:26:47.679320 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-30 00:26:47.680122 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-05-30 00:26:47.681655 | orchestrator | 2025-05-30 00:26:47.687129 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-05-30 00:26:47.687205 | orchestrator | Friday 30 May 2025 00:26:47 +0000 (0:00:05.298) 0:00:19.046 ************ 2025-05-30 00:26:47.856329 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-30 00:26:47.857783 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-30 00:26:47.859192 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-30 00:26:47.860483 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-30 00:26:47.861081 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-30 00:26:47.861791 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-30 00:26:47.862306 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-30 00:26:47.862942 | orchestrator | 2025-05-30 00:26:47.863461 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-30 
00:26:47.863874 | orchestrator | Friday 30 May 2025 00:26:47 +0000 (0:00:00.183) 0:00:19.229 ************ 2025-05-30 00:26:48.919440 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKRQbd/X7CmHfa9onBUkvlDXAnmxS6OnmAAg5faPJqGC) 2025-05-30 00:26:48.919603 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvSY6eLotWOEk8fRQH/UXmRcM2JoE25lt6FPBOkChpitsFYkfE+F230qPfjM1jWIxHBVIheqw3S74BbfEBefqJivYAdEvwy8OYb0Nv115uoO09SGdxXr4xRsO4CvilEXXr2s2po9EgkgBKTZMRxncIcRtyGeo/8vXCuDVirXigiAzfNJ7DrAIdHsebVpHJOAu0JB0uMmpfWmLZmFWsACFxR/XtI45wED43J5PdKjd3RrCHHgFGUU+dEQW+WUIW2fKZCUJHaDp1GoxLoVVBO7W3KyZdBiD4dY7rRwj7Yj9j6xlRty4Hv+a6VbUpX2AV7BV7/soxi5cotQac8l57X8oAri93+eIcM24jLmr9FTL1pYVHIyh8L2Tcwes0zgttkjxmwCNNL0v8cjuiuaSAr4UeYz8N85IJxALFMEkVhp7eWsCVW4Ut/9HqWOYD23dNzgByAW21L5JjT1JBsS6qX/PoCgwPFGqU9xhLTGTruJ2KhP9nUhUdwB5Na4bauDbjKws=) 2025-05-30 00:26:48.920375 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN7bwTs2XPqkD1d1G/M8g6DK1u3MRTWaPWo1Qqw4gTXRt/sp9CUwV1pIAOcmW1C7NPWGX5o9RtwZ70rHYW62AI8=) 2025-05-30 00:26:48.920793 | orchestrator | 2025-05-30 00:26:48.921387 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-30 00:26:48.922187 | orchestrator | Friday 30 May 2025 00:26:48 +0000 (0:00:01.060) 0:00:20.289 ************ 2025-05-30 00:26:49.954210 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCNLNjVOdMDS69pg+KubFomjTPwFxUGTVC4kskMifkIZyGo2sOfJxHY6bz/sn9s9i+rZgBIUshOJSz63e1YImMiBNXydEzHFLRtNHIgylvyaKf+rWsV/qGTx5N9W5AfAXh2wgt5TyEdMn02bCEldIDBJerYl8kYm4GTnhrdBocWmVgkJv45KwuzkpTU3JnrkXLQns0IPiEt4HQeU3XfNFOCIoaOO4U6ajIdryFwLCBW4kMkOyWiH97kBv0/VaO/CHTEMzDbWFSGTxmXHN/3AjqKTRlXxqGRTzUUO/g+9l6YvmvgO2jY750imsPVX6h+FEtEKt+V979GuM69pYTtPmii7i4403l+Qy0deoIbxr9y/bEvWj+a7/iA4yqPvJ4BAIHt0aWXYx1nXpSjfhAuWhchbRLh0nUeBVN8ujnI5c0WTwOktsgX+QHFuQncQ1MA7o4sby7PNEz73SdWze3QyBFTAQqmzBWE28OHH3NkjICf5nse+1APvEkujl5WdUC2KWE=) 2025-05-30 00:26:49.954264 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFYMoaW6ibFf5WmsNmRoni8gn7T7HXHRa5d9chQIJGkQUXInpbPk08cC/I6e1R2jHF8U5fVN4UE72MXFF8p2yoA=) 2025-05-30 00:26:49.954273 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMv5ifxt/G7X0eIvwdaKvGb7uHs802D3mxN44mjZcSgU) 2025-05-30 00:26:49.954292 | orchestrator | 2025-05-30 00:26:49.954299 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-30 00:26:49.954353 | orchestrator | Friday 30 May 2025 00:26:49 +0000 (0:00:01.032) 0:00:21.322 ************ 2025-05-30 00:26:50.972275 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNVc3zXzRGYJoTWlmnvfjItyOUcV2bd8bwfPTN9yRFHwezzK56RCO5HCZvZv8tzH07MZmd6in9DvN3AYejjj0mg=) 2025-05-30 00:26:50.972539 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDMLLPWm0lyDUtIbg+AhGg8ax5X/2j3FpoDm0ay2hmOmXliq4XAU+Z1Eh89I2cfI4YzxsI+gFcpuaQPzvdverrAhgqNPOMKdtZrbxyTmop1cRNGqp123Usb09dexnQgCuw5G5FgA9BddoVBDQY29XzNyP2btbeT8tvHWDWiPRIR1j3EVpWRJqU1VVP8iGPyU26vrQtD/PG48JM7EOcn4hbmEG6jCl1/Y4quzZBhAz/EXXW0B1D1FRTf+uC3ziYOT7aql0hVeOUcsL8Pj4SSrgW3BpS+vKGIJg8ExblznaQUTy+GbE5PUe4CDWmugKS4ygFvTFWukpkS6g8uVcvrcynUWAvXxoHuwWn7NVf4N2IGXv3vve6HAaEoR0+V8zce+bEXVCgCumOSQcwmz+u/qbl3sRu7jNvVnkiqBd9mpePsYjYCYmIt5qJAAfwOZhc3Qklb+te7lV0pDaJw6NkPY8EVqZf+90KwqCyVDMW86ljVbFguzU+D0gcYHzqK+/Y/HxM=) 2025-05-30 00:26:50.973441 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIARIvCV+3y4Fwgz+lLfSd+ev31CHfVqExxL2FuTHItD5) 2025-05-30 00:26:50.973989 | orchestrator | 2025-05-30 00:26:50.974582 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-30 00:26:50.975503 | orchestrator | Friday 30 May 2025 00:26:50 +0000 (0:00:01.021) 0:00:22.343 ************ 2025-05-30 00:26:51.994380 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA8WYjMgasa/qSQcWISx7wIyCB1HvISdp+f61OHE9P1K) 2025-05-30 00:26:51.994468 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCr3nFkGCk8+0rCHg7Z+DCiihGANNBYfqJPaL6rdu63gb1UbcmXHbCpIVMlhUiJlCngyQMN9KXPkmjm19xpIwcio0A/tAa78Kfu5MonmdeNQcDcxQncx99sJHp4hDM1tX0XlZFeejZ2iIzfptT46ovFHZCReIQFcancM/Pi9W7gKA7lqvJyWDXaxKzrIlUzoJJtgKL3vEMMyifpo2An4sSD2H4YP6tQjLCplyDiVtCcu0smCpq5Ona34hvJph1fd92RUKIXYrzYKEShhJ2DNtipXQPyQ/K0xkCgNsFJXCFjQ4x9TxhLJBPk62DkYP49jJSadQnqAwl/EWNeMWCYhjODYlHMuKr+KDbaSPjcnRpg6afJmIYx3Y4s59BboX5bS+isxVopqv4OMzYEuls7HuuxCDuJUT57bCY/GKi64aPk5qZBaaCTREZW8RlzOXClFrju2e2M7NJuI6oqR6OHkFG0TbSsIi3ReNpGoObpAq+GsLwkH3o4PRLkkF/g84sz8n8=) 2025-05-30 00:26:51.994938 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLLoG5FlXpY+KTi+YtCSLCEX12jxlQNZEh9DsJe9+yQVY7QTg714DqMQurMWrZRE5xaFosnRNFY6OqWSm6Px+og=) 2025-05-30 00:26:51.995473 | orchestrator | 2025-05-30 00:26:51.995806 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-30 00:26:51.996303 | orchestrator | Friday 30 May 2025 00:26:51 +0000 (0:00:01.022) 0:00:23.365 ************ 2025-05-30 00:26:53.072733 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCKSUMPIrIPRt+y/CbfitMHwKVayTZ40ix31EL4gUThrQE30CQAczYagTuhYiKKhQB2BBkpci/5+jI8KP02fIY8=) 2025-05-30 00:26:53.072849 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQClCE4Mq+dXEwLolC1lGS8XiGnSH75/ZZTVzEk0dj+59LhelzUkVseo4SQpGf4bbzwPYCoJ+1MjxKaZI1D+k/J8nYiI0IuQiZIQa66g4SulA2OudA8n1AKRv31p13lcbVkNLVS5D+2fItba/j7dkZLU/UI3ALomB+Jxxr7F8FTvZgqzgwNgvllMrnJn6Vfb9Zhd7uTJPK+ZMWZaDRiZPdow+saTMME+CsG0HMuSMpWD/T5Z/Iy35qjCYt+yVXGftkJar14kfY3Z2EVD0sX6ffzr9jqPZjltRaI3EzXoaV4N+U7G+LspPpp5DSIr6fSMxcPQsqHNj2b1H7ImRJCRdHh9OYGovPDdIYhnL+FCcUzjKjaKgTXiVWN4MPUqjQHR3Hj8R3BtYkDUiqyV597uEAWViqLoZ3i0QwJS7ztX3Vzuse4pylWhU3gIX5WEowcsnS4YBUlUpaFX9jgZTn5Cyo0nNmZ/nwuQRFAMz6zf20+rFZB8xkXj4KZFClD8gepWC5k=) 2025-05-30 00:26:53.072869 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMbd8/3CkMtVMk5o2s5q4jBUL+XTu/vTqCOIAWL4TXCg) 2025-05-30 00:26:53.073089 | orchestrator | 2025-05-30 00:26:53.074280 | orchestrator | TASK 
[osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-30 00:26:53.074388 | orchestrator | Friday 30 May 2025 00:26:53 +0000 (0:00:01.078) 0:00:24.444 ************ 2025-05-30 00:26:54.103269 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBKpxX8tSTCa8/G3uByvBpNWeJU4mk08dVZ2W7a+Hhj/) 2025-05-30 00:26:54.104501 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKxmtWFFPk/6tZNf9cpTcqkoDhJHH2CA886kuW0m+iqjAwSXeMEY/j/w4MIdrR/VzxsmRnmwIVkCCS1/GDR9QMM1fJCeZwlnimq9nDIHj7yfiv6Vzd52Xex6X/Q3/hXN5pD0YiX2N18wZsOHYTsD8DZ9GBR3olMTIa/RgTOsMJlZrN6Eo6Mhqgo7XBhqJg/8bmFTu1cSIaXsPZBChRZ2Lo1Uy72c2jTLP6FgzIzMebQ9nEUgmYGbBXskO6+/g5rLeIV6OqW/6YDzkoPZsWPEGYjDAnIIfJo26WY83mYGEHP1AmCBxofomadqXltY3zhv6FJ10YPj8bfGnTqjwuZ9yPSbpOs/v8NVrBP0WmptSczSXofV5udrznubE1FjostEvKKd+MFZHUO0J/SxDx9A1vGFGzXWLuzOFnRR9tZh4lnQxC/0oKa9q1uCnv3WPuzldPXaeqKTJ6cxPZHktzx9oaQVtj+Nv3rZzAYCukiVSJKwni7u2jR7wQRiezZeFy1wU=) 2025-05-30 00:26:54.104547 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEgS2e3daugSBCGrwX1CurTFoOfWC0XFTxGPVNSwtc1A1Na1nj2A/sjYwDWOQH+MFGAXAX0aHufBrCMteJg5DbQ=) 2025-05-30 00:26:54.105128 | orchestrator | 2025-05-30 00:26:54.105667 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-30 00:26:54.106294 | orchestrator | Friday 30 May 2025 00:26:54 +0000 (0:00:01.030) 0:00:25.474 ************ 2025-05-30 00:26:55.115789 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCcZVet5lO0ckVmUTY7vrH30TXtO+F16AhKWVo7apWpgfmje3Wbqw9occpcu0Lg9HDbcHgj7wf8fM6Vlt1ce02Jg56xpTBJBQYn77w2Ie+I1dGPnu/dcDmfM4AsAtsQvI8p+QmJ3nB3tjMMTTC6avGzOjFkxd292A0a/I6IHBFRA3NPbo0eMqW5ryFBUISsDV3wkqQnTAdiU9mxEtMkvya4nDvcoNK9X2ou/GXSs73UUuRYw1vNX6IkdtVlZIM+RwxGmjupzSt3F8ORTKyuuy9JCS2kGjMItCB5QpE9MBIllLcIc/oqYfvTOEsAs5JKirrw42MtFQcqyAHEw3w7QkgigXTPFxDxytd4sfSTtdp8LiHZ4FHM3mWe/ihkPEEJT1DznatYBGtbo0cRSelcdGvbj5/xpUsro0zkRR+ZbXaYgcATTneaWq5Byvh7YMUhXxoCWbYkNdU15+OF/UoiIc8/ynb+gDJQDeutU0wG17IA21qUeaGrHJnXdUgcoSN3l8=) 2025-05-30 00:26:55.116674 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMx2LnrWuKZdq2IkQ8UBsRbLtV0kvnmqYibmAqOIydLl2zs2wor2st8WD7FPwdv9OKV6k9kMU4MKiLv6zj7k3mw=) 2025-05-30 00:26:55.117209 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILUxHLV3QmKil9zyaI4H29axBBM+MPM7+YzMPLACTTLX) 2025-05-30 00:26:55.118260 | orchestrator | 2025-05-30 00:26:55.118599 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-05-30 00:26:55.119093 | orchestrator | Friday 30 May 2025 00:26:55 +0000 (0:00:01.011) 0:00:26.486 ************ 2025-05-30 00:26:55.286290 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-05-30 00:26:55.287507 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-05-30 00:26:55.287953 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-05-30 00:26:55.288914 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-05-30 00:26:55.289971 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-05-30 00:26:55.290355 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-05-30 00:26:55.290884 | 
orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-05-30 00:26:55.291430 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:26:55.291811 | orchestrator | 2025-05-30 00:26:55.292538 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-05-30 00:26:55.293156 | orchestrator | Friday 30 May 2025 00:26:55 +0000 (0:00:00.171) 0:00:26.658 ************ 2025-05-30 00:26:55.338404 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:26:55.338555 | orchestrator | 2025-05-30 00:26:55.339570 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-05-30 00:26:55.339900 | orchestrator | Friday 30 May 2025 00:26:55 +0000 (0:00:00.053) 0:00:26.711 ************ 2025-05-30 00:26:55.389732 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:26:55.390494 | orchestrator | 2025-05-30 00:26:55.391652 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-05-30 00:26:55.392549 | orchestrator | Friday 30 May 2025 00:26:55 +0000 (0:00:00.051) 0:00:26.762 ************ 2025-05-30 00:26:56.023879 | orchestrator | changed: [testbed-manager] 2025-05-30 00:26:56.025405 | orchestrator | 2025-05-30 00:26:56.025445 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:26:56.025460 | orchestrator | 2025-05-30 00:26:56 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-30 00:26:56.025472 | orchestrator | 2025-05-30 00:26:56 | INFO  | Please wait and do not abort execution. 2025-05-30 00:26:56.026648 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-30 00:26:56.027402 | orchestrator | 2025-05-30 00:26:56.028023 | orchestrator | Friday 30 May 2025 00:26:56 +0000 (0:00:00.632) 0:00:27.395 ************ 2025-05-30 00:26:56.028753 | orchestrator | =============================================================================== 2025-05-30 00:26:56.029655 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.05s 2025-05-30 00:26:56.029799 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.30s 2025-05-30 00:26:56.030406 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-05-30 00:26:56.030748 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-05-30 00:26:56.031468 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-05-30 00:26:56.031751 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-05-30 00:26:56.032246 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-05-30 00:26:56.032910 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-05-30 00:26:56.033143 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-05-30 00:26:56.033797 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-05-30 00:26:56.034376 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-05-30 00:26:56.034985 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries 
----------- 1.03s 2025-05-30 00:26:56.035313 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-05-30 00:26:56.035782 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-05-30 00:26:56.036790 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-05-30 00:26:56.037283 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-05-30 00:26:56.037754 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.63s 2025-05-30 00:26:56.038116 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2025-05-30 00:26:56.038488 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2025-05-30 00:26:56.038760 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2025-05-30 00:26:56.440917 | orchestrator | + osism apply squid 2025-05-30 00:26:57.814692 | orchestrator | 2025-05-30 00:26:57 | INFO  | Task c3bb8f3a-bece-4848-8632-638e09cdd858 (squid) was prepared for execution. 2025-05-30 00:26:57.814798 | orchestrator | 2025-05-30 00:26:57 | INFO  | It takes a moment until task c3bb8f3a-bece-4848-8632-638e09cdd858 (squid) has been started and output is visible here. 2025-05-30 00:27:00.782616 | orchestrator | 2025-05-30 00:27:00.783279 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-05-30 00:27:00.784682 | orchestrator | 2025-05-30 00:27:00.785629 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-05-30 00:27:00.787244 | orchestrator | Friday 30 May 2025 00:27:00 +0000 (0:00:00.105) 0:00:00.105 ************ 2025-05-30 00:27:00.869466 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-05-30 00:27:00.870406 | orchestrator | 2025-05-30 00:27:00.870774 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-05-30 00:27:00.871621 | orchestrator | Friday 30 May 2025 00:27:00 +0000 (0:00:00.090) 0:00:00.195 ************ 2025-05-30 00:27:02.268438 | orchestrator | ok: [testbed-manager] 2025-05-30 00:27:02.268972 | orchestrator | 2025-05-30 00:27:02.269003 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-05-30 00:27:02.269304 | orchestrator | Friday 30 May 2025 00:27:02 +0000 (0:00:01.397) 0:00:01.593 ************ 2025-05-30 00:27:03.390815 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-05-30 00:27:03.390921 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-05-30 00:27:03.391406 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-05-30 00:27:03.392789 | orchestrator | 2025-05-30 00:27:03.393576 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-05-30 00:27:03.394693 | orchestrator | Friday 30 May 2025 00:27:03 +0000 (0:00:01.121) 0:00:02.714 ************ 2025-05-30 00:27:04.486964 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-05-30 00:27:04.487654 | orchestrator | 2025-05-30 00:27:04.488376 | orchestrator | TASK [osism.services.squid : Remove 
osism_allow_list.conf configuration file] *** 2025-05-30 00:27:04.489886 | orchestrator | Friday 30 May 2025 00:27:04 +0000 (0:00:01.097) 0:00:03.811 ************ 2025-05-30 00:27:04.867142 | orchestrator | ok: [testbed-manager] 2025-05-30 00:27:04.867287 | orchestrator | 2025-05-30 00:27:04.868151 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-05-30 00:27:04.870355 | orchestrator | Friday 30 May 2025 00:27:04 +0000 (0:00:00.381) 0:00:04.193 ************ 2025-05-30 00:27:05.832169 | orchestrator | changed: [testbed-manager] 2025-05-30 00:27:05.832315 | orchestrator | 2025-05-30 00:27:05.832592 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-05-30 00:27:05.833435 | orchestrator | Friday 30 May 2025 00:27:05 +0000 (0:00:00.964) 0:00:05.157 ************ 2025-05-30 00:27:37.402984 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 2025-05-30 00:27:37.403120 | orchestrator | ok: [testbed-manager] 2025-05-30 00:27:37.403139 | orchestrator | 2025-05-30 00:27:37.403152 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-05-30 00:27:37.403165 | orchestrator | Friday 30 May 2025 00:27:37 +0000 (0:00:31.566) 0:00:36.723 ************ 2025-05-30 00:27:49.929508 | orchestrator | changed: [testbed-manager] 2025-05-30 00:27:49.929631 | orchestrator | 2025-05-30 00:27:49.929650 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-05-30 00:27:49.929663 | orchestrator | Friday 30 May 2025 00:27:49 +0000 (0:00:12.526) 0:00:49.250 ************ 2025-05-30 00:28:50.010352 | orchestrator | Pausing for 60 seconds 2025-05-30 00:28:50.010508 | orchestrator | changed: [testbed-manager] 2025-05-30 00:28:50.010527 | orchestrator | 2025-05-30 00:28:50.010540 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-05-30 00:28:50.010554 | orchestrator | Friday 30 May 2025 00:28:49 +0000 (0:01:00.079) 0:01:49.329 ************ 2025-05-30 00:28:50.078667 | orchestrator | ok: [testbed-manager] 2025-05-30 00:28:50.079050 | orchestrator | 2025-05-30 00:28:50.081471 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-05-30 00:28:50.081952 | orchestrator | Friday 30 May 2025 00:28:50 +0000 (0:00:00.074) 0:01:49.403 ************ 2025-05-30 00:28:50.671949 | orchestrator | changed: [testbed-manager] 2025-05-30 00:28:50.672057 | orchestrator | 2025-05-30 00:28:50.672104 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:28:50.672119 | orchestrator | 2025-05-30 00:28:50 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-30 00:28:50.672132 | orchestrator | 2025-05-30 00:28:50 | INFO  | Please wait and do not abort execution. 
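The osism.services.squid play above copies the proxy configuration to /opt/squid/configuration, renders a docker-compose.yml, and a handler then waits until the container reports healthy. A minimal compose sketch of that shape, assuming a containerized Squid with a simple TCP health probe (the real file comes from the role's templates; image name, port, volume mount and healthcheck are assumptions, not the role's actual content):

# Sketch only: not the file generated by osism.services.squid.
services:
  squid:
    image: osism/squid:latest              # assumed image reference
    restart: unless-stopped
    ports:
      - "3128:3128"                        # default Squid listening port
    volumes:
      - /opt/squid/configuration:/etc/squid/conf.d:ro   # assumed mount of the copied config
    healthcheck:                           # what a "wait for healthy" style handler relies on
      test: ["CMD-SHELL", "nc -z localhost 3128 || exit 1"]   # assumes nc exists in the image
      interval: 30s
      timeout: 10s
      retries: 5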
2025-05-30 00:28:50.672377 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:28:50.672481 | orchestrator | 2025-05-30 00:28:50.673301 | orchestrator | Friday 30 May 2025 00:28:50 +0000 (0:00:00.589) 0:01:49.993 ************ 2025-05-30 00:28:50.673380 | orchestrator | =============================================================================== 2025-05-30 00:28:50.673960 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2025-05-30 00:28:50.674172 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.57s 2025-05-30 00:28:50.675274 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.53s 2025-05-30 00:28:50.675398 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.40s 2025-05-30 00:28:50.675418 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.12s 2025-05-30 00:28:50.675659 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.10s 2025-05-30 00:28:50.675959 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.96s 2025-05-30 00:28:50.676428 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.59s 2025-05-30 00:28:50.676783 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.38s 2025-05-30 00:28:50.677288 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2025-05-30 00:28:50.677819 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-05-30 00:28:51.067039 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-30 00:28:51.067139 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-05-30 00:28:51.070385 | orchestrator | ++ semver 8.1.0 9.0.0 2025-05-30 00:28:51.118850 | orchestrator | + [[ -1 -lt 0 ]] 2025-05-30 00:28:51.118945 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-30 00:28:51.118961 | orchestrator | + sed -i 's|^# \(network_dispatcher_scripts:\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml 2025-05-30 00:28:51.122703 | orchestrator | + sed -i 's|^# \( - src: /opt/configuration/network/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml 2025-05-30 00:28:51.129180 | orchestrator | + sed -i 's|^# \( dest: routable.d/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml 2025-05-30 00:28:51.132255 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-05-30 00:28:52.497928 | orchestrator | 2025-05-30 00:28:52 | INFO  | Task 9a478c22-faf1-4bae-8c3f-1593f533eea4 (operator) was prepared for execution. 2025-05-30 00:28:52.498089 | orchestrator | 2025-05-30 00:28:52 | INFO  | It takes a moment until task 9a478c22-faf1-4bae-8c3f-1593f533eea4 (operator) has been started and output is visible here. 
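Before applying the operator role, the shell steps above pin the Kolla image namespace for the 8.1.0 release and re-enable the network-dispatcher hook in the testbed group_vars. Reconstructed from the sed expressions themselves (indentation is approximate), the affected files should end up containing roughly:

# /opt/configuration/inventory/group_vars/all/kolla.yml (after the first sed)
docker_namespace: kolla/release

# /opt/configuration/inventory/group_vars/testbed-nodes.yml and testbed-managers.yml
# (after the commented lines are uncommented)
network_dispatcher_scripts:
  - src: /opt/configuration/network/vxlan.sh
    dest: routable.d/vxlan.sh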
2025-05-30 00:28:55.420426 | orchestrator | 2025-05-30 00:28:55.420539 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-05-30 00:28:55.420853 | orchestrator | 2025-05-30 00:28:55.423498 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-30 00:28:55.423781 | orchestrator | Friday 30 May 2025 00:28:55 +0000 (0:00:00.087) 0:00:00.087 ************ 2025-05-30 00:28:58.701684 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:28:58.705002 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:28:58.705838 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:28:58.706186 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:28:58.707132 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:28:58.709851 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:28:58.710555 | orchestrator | 2025-05-30 00:28:58.711219 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-05-30 00:28:58.712487 | orchestrator | Friday 30 May 2025 00:28:58 +0000 (0:00:03.286) 0:00:03.373 ************ 2025-05-30 00:28:59.441184 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:28:59.441409 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:28:59.442148 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:28:59.445494 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:28:59.445517 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:28:59.445528 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:28:59.445539 | orchestrator | 2025-05-30 00:28:59.445552 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-05-30 00:28:59.445565 | orchestrator | 2025-05-30 00:28:59.445594 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-30 00:28:59.446473 | orchestrator | Friday 30 May 2025 00:28:59 +0000 (0:00:00.739) 0:00:04.112 ************ 2025-05-30 00:28:59.501651 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:28:59.528437 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:28:59.546536 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:28:59.587179 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:28:59.590486 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:28:59.590532 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:28:59.590545 | orchestrator | 2025-05-30 00:28:59.590557 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-30 00:28:59.590569 | orchestrator | Friday 30 May 2025 00:28:59 +0000 (0:00:00.146) 0:00:04.259 ************ 2025-05-30 00:28:59.669154 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:28:59.692448 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:28:59.715855 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:28:59.764686 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:28:59.765049 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:28:59.765894 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:28:59.766392 | orchestrator | 2025-05-30 00:28:59.769321 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-30 00:28:59.769347 | orchestrator | Friday 30 May 2025 00:28:59 +0000 (0:00:00.178) 0:00:04.437 ************ 2025-05-30 00:29:00.401570 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:29:00.403926 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:29:00.404550 | orchestrator | changed: [testbed-node-5] 2025-05-30 
00:29:00.404870 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:29:00.405303 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:29:00.405660 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:29:00.406180 | orchestrator | 2025-05-30 00:29:00.406422 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-30 00:29:00.406870 | orchestrator | Friday 30 May 2025 00:29:00 +0000 (0:00:00.634) 0:00:05.071 ************ 2025-05-30 00:29:01.351715 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:29:01.354383 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:29:01.354447 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:29:01.354469 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:29:01.354566 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:29:01.355091 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:29:01.355719 | orchestrator | 2025-05-30 00:29:01.356197 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-30 00:29:01.356748 | orchestrator | Friday 30 May 2025 00:29:01 +0000 (0:00:00.951) 0:00:06.023 ************ 2025-05-30 00:29:02.547018 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-05-30 00:29:02.547162 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-05-30 00:29:02.548429 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-05-30 00:29:02.550313 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-05-30 00:29:02.550714 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-05-30 00:29:02.552335 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-05-30 00:29:02.552376 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-05-30 00:29:02.553565 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-05-30 00:29:02.554561 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-05-30 00:29:02.554854 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-05-30 00:29:02.555144 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-05-30 00:29:02.555882 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-05-30 00:29:02.556352 | orchestrator | 2025-05-30 00:29:02.557079 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-05-30 00:29:02.557404 | orchestrator | Friday 30 May 2025 00:29:02 +0000 (0:00:01.194) 0:00:07.217 ************ 2025-05-30 00:29:03.818013 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:29:03.821222 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:29:03.821294 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:29:03.821307 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:29:03.821654 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:29:03.822289 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:29:03.823505 | orchestrator | 2025-05-30 00:29:03.824157 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-30 00:29:03.825318 | orchestrator | Friday 30 May 2025 00:29:03 +0000 (0:00:01.270) 0:00:08.487 ************ 2025-05-30 00:29:05.058269 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-05-30 00:29:05.059935 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-05-30 00:29:05.060897 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-05-30 00:29:05.141264 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-05-30 00:29:05.143103 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-05-30 00:29:05.144893 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-05-30 00:29:05.146345 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-05-30 00:29:05.148299 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-05-30 00:29:05.149319 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-05-30 00:29:05.150384 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-05-30 00:29:05.151841 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-05-30 00:29:05.152906 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-05-30 00:29:05.153379 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-05-30 00:29:05.154465 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-05-30 00:29:05.156199 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-05-30 00:29:05.157130 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-05-30 00:29:05.157164 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-05-30 00:29:05.158161 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-05-30 00:29:05.159184 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-05-30 00:29:05.160422 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-05-30 00:29:05.161488 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-05-30 00:29:05.162892 | orchestrator | 2025-05-30 00:29:05.163691 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-30 00:29:05.164945 | orchestrator | Friday 30 May 2025 00:29:05 +0000 (0:00:01.325) 0:00:09.812 ************ 2025-05-30 00:29:05.832847 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:29:05.834318 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:29:05.836027 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:29:05.836658 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:29:05.837442 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:29:05.838939 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:29:05.839090 | orchestrator | 2025-05-30 00:29:05.840402 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-30 00:29:05.841053 | orchestrator | Friday 30 May 2025 00:29:05 +0000 (0:00:00.690) 0:00:10.503 ************ 2025-05-30 00:29:05.921877 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:29:05.943855 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:29:05.993333 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:29:05.993826 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:29:05.994280 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:29:05.994912 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:29:05.995413 | orchestrator | 2025-05-30 00:29:05.995865 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 
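The osism.commons.operator role running here provisions the operator account that later plays connect as: it creates a group and user, adds the user to adm and sudo, installs a sudoers drop-in, sets C.UTF-8 locale exports in .bashrc, and writes the SSH authorized keys. A condensed sketch of the same steps with stock Ansible modules (user name, key source and sudoers content are placeholders, not the role's actual defaults):

# Sketch only; the real implementation lives in osism.commons.operator.
- name: Create operator group
  ansible.builtin.group:
    name: dragon                      # placeholder operator name
    state: present

- name: Create operator user
  ansible.builtin.user:
    name: dragon
    group: dragon
    groups: [adm, sudo]               # mirrors the "Add user to additional groups" loop above
    append: true
    shell: /bin/bash

- name: Copy user sudoers file
  ansible.builtin.copy:
    dest: /etc/sudoers.d/dragon
    content: "dragon ALL=(ALL) NOPASSWD: ALL\n"
    mode: "0440"
    validate: visudo -cf %s

- name: Set ssh authorized keys
  ansible.posix.authorized_key:
    user: dragon
    key: "{{ lookup('file', 'operator_id_rsa.pub') }}"   # placeholder public key file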
2025-05-30 00:29:05.996248 | orchestrator | Friday 30 May 2025 00:29:05 +0000 (0:00:00.163) 0:00:10.666 ************ 2025-05-30 00:29:06.707312 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-30 00:29:06.707413 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:29:06.707428 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-05-30 00:29:06.707440 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:29:06.707452 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-30 00:29:06.709921 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-05-30 00:29:06.709948 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:29:06.709960 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:29:06.709971 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-30 00:29:06.709982 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-30 00:29:06.710289 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:29:06.710764 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:29:06.711173 | orchestrator | 2025-05-30 00:29:06.711546 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-30 00:29:06.711973 | orchestrator | Friday 30 May 2025 00:29:06 +0000 (0:00:00.707) 0:00:11.374 ************ 2025-05-30 00:29:06.771342 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:29:06.788963 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:29:06.806350 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:29:06.843402 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:29:06.843461 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:29:06.843473 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:29:06.843485 | orchestrator | 2025-05-30 00:29:06.843497 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-30 00:29:06.843628 | orchestrator | Friday 30 May 2025 00:29:06 +0000 (0:00:00.137) 0:00:11.511 ************ 2025-05-30 00:29:06.912530 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:29:06.928725 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:29:06.955504 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:29:06.982541 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:29:06.982659 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:29:06.982682 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:29:06.982700 | orchestrator | 2025-05-30 00:29:06.982721 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-05-30 00:29:06.982742 | orchestrator | Friday 30 May 2025 00:29:06 +0000 (0:00:00.142) 0:00:11.653 ************ 2025-05-30 00:29:07.019325 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:29:07.044397 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:29:07.074224 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:29:07.102457 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:29:07.139883 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:29:07.140574 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:29:07.145577 | orchestrator | 2025-05-30 00:29:07.145631 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-30 00:29:07.145646 | orchestrator | Friday 30 May 2025 00:29:07 +0000 (0:00:00.156) 0:00:11.810 ************ 2025-05-30 00:29:07.812580 | orchestrator | changed: [testbed-node-0] 2025-05-30 
00:29:07.812828 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:29:07.815008 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:29:07.816639 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:29:07.816706 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:29:07.817135 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:29:07.817774 | orchestrator | 2025-05-30 00:29:07.818175 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-30 00:29:07.818718 | orchestrator | Friday 30 May 2025 00:29:07 +0000 (0:00:00.671) 0:00:12.481 ************ 2025-05-30 00:29:07.885692 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:29:07.907520 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:29:07.936480 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:29:08.030691 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:29:08.031323 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:29:08.031956 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:29:08.032897 | orchestrator | 2025-05-30 00:29:08.033969 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:29:08.034661 | orchestrator | 2025-05-30 00:29:08 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-30 00:29:08.034697 | orchestrator | 2025-05-30 00:29:08 | INFO  | Please wait and do not abort execution. 2025-05-30 00:29:08.035394 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-30 00:29:08.037397 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-30 00:29:08.038278 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-30 00:29:08.039162 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-30 00:29:08.039699 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-30 00:29:08.040703 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-30 00:29:08.041718 | orchestrator | 2025-05-30 00:29:08.042445 | orchestrator | Friday 30 May 2025 00:29:08 +0000 (0:00:00.221) 0:00:12.703 ************ 2025-05-30 00:29:08.044216 | orchestrator | =============================================================================== 2025-05-30 00:29:08.045045 | orchestrator | Gathering Facts --------------------------------------------------------- 3.29s 2025-05-30 00:29:08.046311 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.33s 2025-05-30 00:29:08.047262 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.27s 2025-05-30 00:29:08.048079 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.19s 2025-05-30 00:29:08.048941 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.95s 2025-05-30 00:29:08.049721 | orchestrator | Do not require tty for all users ---------------------------------------- 0.74s 2025-05-30 00:29:08.050125 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.71s 2025-05-30 00:29:08.050613 | orchestrator | osism.commons.operator : Create .ssh directory 
-------------------------- 0.69s 2025-05-30 00:29:08.051223 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.67s 2025-05-30 00:29:08.052032 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.63s 2025-05-30 00:29:08.052761 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.22s 2025-05-30 00:29:08.053456 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.18s 2025-05-30 00:29:08.054091 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s 2025-05-30 00:29:08.054447 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s 2025-05-30 00:29:08.055010 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.15s 2025-05-30 00:29:08.055536 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.14s 2025-05-30 00:29:08.056170 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s 2025-05-30 00:29:08.310710 | orchestrator | + osism apply --environment custom facts 2025-05-30 00:29:09.541564 | orchestrator | 2025-05-30 00:29:09 | INFO  | Trying to run play facts in environment custom 2025-05-30 00:29:09.584659 | orchestrator | 2025-05-30 00:29:09 | INFO  | Task ecef77c3-19fe-45bf-b829-71acdd9cf405 (facts) was prepared for execution. 2025-05-30 00:29:09.584748 | orchestrator | 2025-05-30 00:29:09 | INFO  | It takes a moment until task ecef77c3-19fe-45bf-b829-71acdd9cf405 (facts) has been started and output is visible here. 2025-05-30 00:29:12.516538 | orchestrator | 2025-05-30 00:29:12.516655 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-05-30 00:29:12.521704 | orchestrator | 2025-05-30 00:29:12.523367 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-30 00:29:12.523664 | orchestrator | Friday 30 May 2025 00:29:12 +0000 (0:00:00.081) 0:00:00.081 ************ 2025-05-30 00:29:13.717390 | orchestrator | ok: [testbed-manager] 2025-05-30 00:29:14.755934 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:29:14.756508 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:29:14.757639 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:29:14.759138 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:29:14.761683 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:29:14.762263 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:29:14.762764 | orchestrator | 2025-05-30 00:29:14.765080 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-05-30 00:29:14.765124 | orchestrator | Friday 30 May 2025 00:29:14 +0000 (0:00:02.240) 0:00:02.322 ************ 2025-05-30 00:29:15.908135 | orchestrator | ok: [testbed-manager] 2025-05-30 00:29:16.774976 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:29:16.775087 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:29:16.775841 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:29:16.776775 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:29:16.779029 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:29:16.779111 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:29:16.779759 | orchestrator | 2025-05-30 00:29:16.780564 | orchestrator | PLAY [Copy custom ceph devices facts] 
****************************************** 2025-05-30 00:29:16.781368 | orchestrator | 2025-05-30 00:29:16.781849 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-30 00:29:16.782454 | orchestrator | Friday 30 May 2025 00:29:16 +0000 (0:00:02.016) 0:00:04.339 ************ 2025-05-30 00:29:16.913838 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:29:16.916014 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:29:16.916069 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:29:16.916090 | orchestrator | 2025-05-30 00:29:16.919533 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-30 00:29:16.920096 | orchestrator | Friday 30 May 2025 00:29:16 +0000 (0:00:00.141) 0:00:04.480 ************ 2025-05-30 00:29:17.040598 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:29:17.040975 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:29:17.042924 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:29:17.044219 | orchestrator | 2025-05-30 00:29:17.048126 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-30 00:29:17.048168 | orchestrator | Friday 30 May 2025 00:29:17 +0000 (0:00:00.125) 0:00:04.606 ************ 2025-05-30 00:29:17.149418 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:29:17.149549 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:29:17.149710 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:29:17.149732 | orchestrator | 2025-05-30 00:29:17.150304 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-30 00:29:17.150479 | orchestrator | Friday 30 May 2025 00:29:17 +0000 (0:00:00.110) 0:00:04.717 ************ 2025-05-30 00:29:17.312298 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:29:17.312866 | orchestrator | 2025-05-30 00:29:17.313551 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-30 00:29:17.314152 | orchestrator | Friday 30 May 2025 00:29:17 +0000 (0:00:00.162) 0:00:04.879 ************ 2025-05-30 00:29:17.784124 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:29:17.790127 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:29:17.790174 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:29:17.790187 | orchestrator | 2025-05-30 00:29:17.790200 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-30 00:29:17.790212 | orchestrator | Friday 30 May 2025 00:29:17 +0000 (0:00:00.470) 0:00:05.350 ************ 2025-05-30 00:29:17.886553 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:29:17.887187 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:29:17.888295 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:29:17.888813 | orchestrator | 2025-05-30 00:29:17.889481 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-30 00:29:17.889919 | orchestrator | Friday 30 May 2025 00:29:17 +0000 (0:00:00.102) 0:00:05.452 ************ 2025-05-30 00:29:18.885738 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:29:18.886395 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:29:18.887507 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:29:18.888313 | orchestrator | 2025-05-30 00:29:18.888952 | orchestrator | TASK 
[osism.commons.repository : Remove sources.list file] ********************* 2025-05-30 00:29:18.889618 | orchestrator | Friday 30 May 2025 00:29:18 +0000 (0:00:01.000) 0:00:06.452 ************ 2025-05-30 00:29:19.398195 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:29:19.399018 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:29:19.399787 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:29:19.402528 | orchestrator | 2025-05-30 00:29:19.402969 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-30 00:29:19.403607 | orchestrator | Friday 30 May 2025 00:29:19 +0000 (0:00:00.511) 0:00:06.963 ************ 2025-05-30 00:29:20.438702 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:29:20.438922 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:29:20.439309 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:29:20.439656 | orchestrator | 2025-05-30 00:29:20.440609 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-30 00:29:20.440647 | orchestrator | Friday 30 May 2025 00:29:20 +0000 (0:00:01.040) 0:00:08.004 ************ 2025-05-30 00:29:34.508838 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:29:34.508981 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:29:34.508999 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:29:34.511681 | orchestrator | 2025-05-30 00:29:34.512360 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-05-30 00:29:34.512944 | orchestrator | Friday 30 May 2025 00:29:34 +0000 (0:00:14.066) 0:00:22.071 ************ 2025-05-30 00:29:34.569389 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:29:34.613475 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:29:34.614227 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:29:34.615286 | orchestrator | 2025-05-30 00:29:34.618436 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-05-30 00:29:34.618885 | orchestrator | Friday 30 May 2025 00:29:34 +0000 (0:00:00.109) 0:00:22.180 ************ 2025-05-30 00:29:41.751151 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:29:41.751500 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:29:41.754306 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:29:41.754557 | orchestrator | 2025-05-30 00:29:41.757417 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-30 00:29:41.757447 | orchestrator | Friday 30 May 2025 00:29:41 +0000 (0:00:07.132) 0:00:29.313 ************ 2025-05-30 00:29:42.256154 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:29:42.257221 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:29:42.258217 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:29:42.259003 | orchestrator | 2025-05-30 00:29:42.259823 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-05-30 00:29:42.260974 | orchestrator | Friday 30 May 2025 00:29:42 +0000 (0:00:00.509) 0:00:29.822 ************ 2025-05-30 00:29:45.859125 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-05-30 00:29:45.859992 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-05-30 00:29:45.860402 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-05-30 00:29:45.861309 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 
2025-05-30 00:29:45.862726 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-05-30 00:29:45.863423 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-05-30 00:29:45.864095 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-05-30 00:29:45.865288 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-05-30 00:29:45.866563 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-05-30 00:29:45.866589 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-05-30 00:29:45.867363 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-05-30 00:29:45.868083 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-05-30 00:29:45.868808 | orchestrator | 2025-05-30 00:29:45.869392 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-30 00:29:45.870214 | orchestrator | Friday 30 May 2025 00:29:45 +0000 (0:00:03.602) 0:00:33.425 ************ 2025-05-30 00:29:46.897598 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:29:46.898781 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:29:46.898875 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:29:46.899856 | orchestrator | 2025-05-30 00:29:46.901141 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-30 00:29:46.901953 | orchestrator | 2025-05-30 00:29:46.902548 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-30 00:29:46.903342 | orchestrator | Friday 30 May 2025 00:29:46 +0000 (0:00:01.035) 0:00:34.460 ************ 2025-05-30 00:29:48.595419 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:29:51.837844 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:29:51.837951 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:29:51.838149 | orchestrator | ok: [testbed-manager] 2025-05-30 00:29:51.838738 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:29:51.839448 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:29:51.840025 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:29:51.840941 | orchestrator | 2025-05-30 00:29:51.842380 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:29:51.842796 | orchestrator | 2025-05-30 00:29:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-30 00:29:51.843204 | orchestrator | 2025-05-30 00:29:51 | INFO  | Please wait and do not abort execution. 
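The facts play that just finished drops static fact files into /etc/ansible/facts.d on the nodes so that later plays can read the testbed's network and Ceph device layout through ansible_local, which is why it ends with a full fact-gathering pass. A minimal sketch of that pattern (file name and device list are illustrative, not the testbed's actual fact content):

# Sketch of the custom-facts pattern used above, with made-up content.
- name: Create custom facts directory
  ansible.builtin.file:
    path: /etc/ansible/facts.d
    state: directory
    mode: "0755"

- name: Copy fact file (plain JSON is parsed by the setup module)
  ansible.builtin.copy:
    dest: /etc/ansible/facts.d/testbed_ceph_osd_devices.fact
    content: '["/dev/sdb", "/dev/sdc"]'   # illustrative device list
    mode: "0644"

- name: Re-gather facts so ansible_local picks up the new file
  ansible.builtin.setup:

- name: Show the custom fact
  ansible.builtin.debug:
    var: ansible_local.testbed_ceph_osd_devices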
2025-05-30 00:29:51.843907 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:29:51.844337 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:29:51.844837 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:29:51.845576 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:29:51.846120 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:29:51.846692 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:29:51.847479 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:29:51.848282 | orchestrator | 2025-05-30 00:29:51.848613 | orchestrator | Friday 30 May 2025 00:29:51 +0000 (0:00:04.944) 0:00:39.404 ************ 2025-05-30 00:29:51.849914 | orchestrator | =============================================================================== 2025-05-30 00:29:51.850525 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.07s 2025-05-30 00:29:51.851773 | orchestrator | Install required packages (Debian) -------------------------------------- 7.13s 2025-05-30 00:29:51.852660 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.94s 2025-05-30 00:29:51.852916 | orchestrator | Copy fact files --------------------------------------------------------- 3.60s 2025-05-30 00:29:51.853625 | orchestrator | Create custom facts directory ------------------------------------------- 2.24s 2025-05-30 00:29:51.854059 | orchestrator | Copy fact file ---------------------------------------------------------- 2.02s 2025-05-30 00:29:51.854879 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.04s 2025-05-30 00:29:51.855098 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.04s 2025-05-30 00:29:51.855711 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.00s 2025-05-30 00:29:51.856089 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.51s 2025-05-30 00:29:51.856568 | orchestrator | Create custom facts directory ------------------------------------------- 0.51s 2025-05-30 00:29:51.856863 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.47s 2025-05-30 00:29:51.857333 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s 2025-05-30 00:29:51.857698 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.14s 2025-05-30 00:29:51.858082 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.13s 2025-05-30 00:29:51.858493 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.11s 2025-05-30 00:29:51.858803 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s 2025-05-30 00:29:51.859402 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.10s 2025-05-30 00:29:52.250526 | orchestrator | + osism apply bootstrap 2025-05-30 00:29:53.673463 | 
orchestrator | 2025-05-30 00:29:53 | INFO  | Task 11fc3f0d-754d-46f5-93e6-f21b6282d5e2 (bootstrap) was prepared for execution. 2025-05-30 00:29:53.673565 | orchestrator | 2025-05-30 00:29:53 | INFO  | It takes a moment until task 11fc3f0d-754d-46f5-93e6-f21b6282d5e2 (bootstrap) has been started and output is visible here. 2025-05-30 00:29:56.805285 | orchestrator | 2025-05-30 00:29:56.805419 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-05-30 00:29:56.808583 | orchestrator | 2025-05-30 00:29:56.809755 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-05-30 00:29:56.810918 | orchestrator | Friday 30 May 2025 00:29:56 +0000 (0:00:00.105) 0:00:00.105 ************ 2025-05-30 00:29:56.879930 | orchestrator | ok: [testbed-manager] 2025-05-30 00:29:56.906212 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:29:56.932612 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:29:56.958656 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:29:57.032546 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:29:57.033446 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:29:57.036301 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:29:57.036347 | orchestrator | 2025-05-30 00:29:57.036362 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-30 00:29:57.037430 | orchestrator | 2025-05-30 00:29:57.038474 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-30 00:29:57.039542 | orchestrator | Friday 30 May 2025 00:29:57 +0000 (0:00:00.230) 0:00:00.335 ************ 2025-05-30 00:30:00.612120 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:30:00.612216 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:30:00.612384 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:30:00.612731 | orchestrator | ok: [testbed-manager] 2025-05-30 00:30:00.613517 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:30:00.614242 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:30:00.614605 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:30:00.615310 | orchestrator | 2025-05-30 00:30:00.615522 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-05-30 00:30:00.617104 | orchestrator | 2025-05-30 00:30:00.618222 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-30 00:30:00.619045 | orchestrator | Friday 30 May 2025 00:30:00 +0000 (0:00:03.580) 0:00:03.915 ************ 2025-05-30 00:30:00.713157 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-05-30 00:30:00.713226 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-05-30 00:30:00.713239 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-05-30 00:30:00.713949 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-05-30 00:30:00.755985 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-30 00:30:00.756070 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-05-30 00:30:00.756083 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-05-30 00:30:00.756096 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-30 00:30:00.756107 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-30 00:30:00.798638 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-0)  2025-05-30 00:30:00.798721 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-30 00:30:00.798735 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-05-30 00:30:00.798823 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-05-30 00:30:00.800081 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-30 00:30:00.800112 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-30 00:30:01.057649 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-30 00:30:01.058906 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-30 00:30:01.060404 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-05-30 00:30:01.061313 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-30 00:30:01.062099 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-05-30 00:30:01.063079 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-30 00:30:01.063909 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-30 00:30:01.064470 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:30:01.065216 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-05-30 00:30:01.065963 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:30:01.066436 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-30 00:30:01.067223 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-30 00:30:01.067631 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:30:01.068106 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-05-30 00:30:01.068645 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-30 00:30:01.069438 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-30 00:30:01.069985 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-30 00:30:01.070724 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-30 00:30:01.071194 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-30 00:30:01.071645 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-30 00:30:01.072052 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-30 00:30:01.073030 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-05-30 00:30:01.073330 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-30 00:30:01.073971 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-30 00:30:01.074128 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-30 00:30:01.074958 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-30 00:30:01.076239 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-30 00:30:01.076344 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-30 00:30:01.076375 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-30 00:30:01.076829 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:30:01.077439 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-30 00:30:01.081880 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-30 00:30:01.081923 | 
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-30 00:30:01.084039 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:30:01.084527 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-30 00:30:01.084657 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:30:01.084918 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-30 00:30:01.087707 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-30 00:30:01.087998 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-30 00:30:01.088571 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-30 00:30:01.088837 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:30:01.089604 | orchestrator | 2025-05-30 00:30:01.090128 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-05-30 00:30:01.090416 | orchestrator | 2025-05-30 00:30:01.090944 | orchestrator | TASK [osism.commons.hostname : Set hostname_name fact] ************************* 2025-05-30 00:30:01.091468 | orchestrator | Friday 30 May 2025 00:30:01 +0000 (0:00:00.444) 0:00:04.360 ************ 2025-05-30 00:30:01.132656 | orchestrator | ok: [testbed-manager] 2025-05-30 00:30:01.158467 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:30:01.188248 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:30:01.207686 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:30:01.265179 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:30:01.265778 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:30:01.266636 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:30:01.267398 | orchestrator | 2025-05-30 00:30:01.267856 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-05-30 00:30:01.269951 | orchestrator | Friday 30 May 2025 00:30:01 +0000 (0:00:00.209) 0:00:04.569 ************ 2025-05-30 00:30:02.518338 | orchestrator | ok: [testbed-manager] 2025-05-30 00:30:02.518552 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:30:02.519311 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:30:02.521639 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:30:02.521742 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:30:02.522076 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:30:02.522532 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:30:02.522875 | orchestrator | 2025-05-30 00:30:02.523316 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-05-30 00:30:02.523739 | orchestrator | Friday 30 May 2025 00:30:02 +0000 (0:00:01.251) 0:00:05.821 ************ 2025-05-30 00:30:03.691913 | orchestrator | ok: [testbed-manager] 2025-05-30 00:30:03.693023 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:30:03.694159 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:30:03.695482 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:30:03.696651 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:30:03.697072 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:30:03.698303 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:30:03.700240 | orchestrator | 2025-05-30 00:30:03.700311 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-05-30 00:30:03.700335 | orchestrator | Friday 30 May 2025 00:30:03 +0000 (0:00:01.171) 0:00:06.992 ************ 2025-05-30 00:30:03.948634 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:30:03.949327 | orchestrator | 2025-05-30 00:30:03.950671 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-05-30 00:30:03.951994 | orchestrator | Friday 30 May 2025 00:30:03 +0000 (0:00:00.258) 0:00:07.251 ************ 2025-05-30 00:30:06.077925 | orchestrator | changed: [testbed-manager] 2025-05-30 00:30:06.079157 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:30:06.079198 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:30:06.079411 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:30:06.080349 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:30:06.080915 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:30:06.082476 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:30:06.083298 | orchestrator | 2025-05-30 00:30:06.084202 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-05-30 00:30:06.084256 | orchestrator | Friday 30 May 2025 00:30:06 +0000 (0:00:02.128) 0:00:09.379 ************ 2025-05-30 00:30:06.144698 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:30:06.313009 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:30:06.313112 | orchestrator | 2025-05-30 00:30:06.313185 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-05-30 00:30:06.313937 | orchestrator | Friday 30 May 2025 00:30:06 +0000 (0:00:00.235) 0:00:09.615 ************ 2025-05-30 00:30:07.349709 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:30:07.351549 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:30:07.352927 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:30:07.353035 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:30:07.353050 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:30:07.357490 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:30:07.357624 | orchestrator | 2025-05-30 00:30:07.358937 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-05-30 00:30:07.362782 | orchestrator | Friday 30 May 2025 00:30:07 +0000 (0:00:01.029) 0:00:10.644 ************ 2025-05-30 00:30:07.426173 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:30:08.023377 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:30:08.023751 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:30:08.024670 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:30:08.025565 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:30:08.026211 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:30:08.026593 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:30:08.027338 | orchestrator | 2025-05-30 00:30:08.028042 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-05-30 00:30:08.028323 | orchestrator | Friday 30 May 2025 00:30:08 +0000 (0:00:00.682) 0:00:11.326 ************ 2025-05-30 00:30:08.120728 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:30:08.151212 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:30:08.181168 | 
orchestrator | skipping: [testbed-node-5] 2025-05-30 00:30:08.433379 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:30:08.433791 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:30:08.435014 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:30:08.436285 | orchestrator | ok: [testbed-manager] 2025-05-30 00:30:08.438358 | orchestrator | 2025-05-30 00:30:08.438905 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-05-30 00:30:08.440367 | orchestrator | Friday 30 May 2025 00:30:08 +0000 (0:00:00.409) 0:00:11.735 ************ 2025-05-30 00:30:08.505584 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:30:08.534327 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:30:08.555992 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:30:08.585900 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:30:08.641608 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:30:08.641698 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:30:08.642104 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:30:08.642547 | orchestrator | 2025-05-30 00:30:08.643142 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-05-30 00:30:08.643626 | orchestrator | Friday 30 May 2025 00:30:08 +0000 (0:00:00.209) 0:00:11.945 ************ 2025-05-30 00:30:08.952663 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:30:08.952790 | orchestrator | 2025-05-30 00:30:08.952802 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-05-30 00:30:08.953941 | orchestrator | Friday 30 May 2025 00:30:08 +0000 (0:00:00.310) 0:00:12.256 ************ 2025-05-30 00:30:09.234592 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:30:09.235625 | orchestrator | 2025-05-30 00:30:09.236390 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-05-30 00:30:09.237490 | orchestrator | Friday 30 May 2025 00:30:09 +0000 (0:00:00.280) 0:00:12.536 ************ 2025-05-30 00:30:10.517964 | orchestrator | ok: [testbed-manager] 2025-05-30 00:30:10.518315 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:30:10.518421 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:30:10.519327 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:30:10.519937 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:30:10.520084 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:30:10.520857 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:30:10.521235 | orchestrator | 2025-05-30 00:30:10.522231 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-05-30 00:30:10.522682 | orchestrator | Friday 30 May 2025 00:30:10 +0000 (0:00:01.280) 0:00:13.816 ************ 2025-05-30 00:30:10.593665 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:30:10.616624 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:30:10.638633 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:30:10.659763 | orchestrator | skipping: 
[testbed-node-5] 2025-05-30 00:30:10.719230 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:30:10.719470 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:30:10.720458 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:30:10.721382 | orchestrator | 2025-05-30 00:30:10.722376 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-05-30 00:30:10.722604 | orchestrator | Friday 30 May 2025 00:30:10 +0000 (0:00:00.204) 0:00:14.021 ************ 2025-05-30 00:30:11.237586 | orchestrator | ok: [testbed-manager] 2025-05-30 00:30:11.237736 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:30:11.237755 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:30:11.237833 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:30:11.238346 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:30:11.238578 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:30:11.239029 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:30:11.239834 | orchestrator | 2025-05-30 00:30:11.240130 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-05-30 00:30:11.240387 | orchestrator | Friday 30 May 2025 00:30:11 +0000 (0:00:00.518) 0:00:14.539 ************ 2025-05-30 00:30:11.315445 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:30:11.346926 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:30:11.376767 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:30:11.403534 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:30:11.472383 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:30:11.473594 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:30:11.474965 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:30:11.476221 | orchestrator | 2025-05-30 00:30:11.477199 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-05-30 00:30:11.478150 | orchestrator | Friday 30 May 2025 00:30:11 +0000 (0:00:00.236) 0:00:14.775 ************ 2025-05-30 00:30:12.013465 | orchestrator | ok: [testbed-manager] 2025-05-30 00:30:12.018747 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:30:12.020544 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:30:12.021563 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:30:12.023312 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:30:12.023561 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:30:12.028139 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:30:12.028168 | orchestrator | 2025-05-30 00:30:12.031090 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-05-30 00:30:12.032217 | orchestrator | Friday 30 May 2025 00:30:12 +0000 (0:00:00.539) 0:00:15.315 ************ 2025-05-30 00:30:13.189539 | orchestrator | ok: [testbed-manager] 2025-05-30 00:30:13.190186 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:30:13.191245 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:30:13.191587 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:30:13.192954 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:30:13.193757 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:30:13.194326 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:30:13.195554 | orchestrator | 2025-05-30 00:30:13.196471 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-05-30 00:30:13.196916 | orchestrator | Friday 30 May 2025 
00:30:13 +0000 (0:00:01.176) 0:00:16.491 ************ 2025-05-30 00:30:14.301700 | orchestrator | ok: [testbed-manager] 2025-05-30 00:30:14.301817 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:30:14.302121 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:30:14.302573 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:30:14.305073 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:30:14.306671 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:30:14.307094 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:30:14.308416 | orchestrator | 2025-05-30 00:30:14.309391 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-05-30 00:30:14.309652 | orchestrator | Friday 30 May 2025 00:30:14 +0000 (0:00:01.112) 0:00:17.603 ************ 2025-05-30 00:30:14.600912 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:30:14.601732 | orchestrator | 2025-05-30 00:30:14.602428 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-05-30 00:30:14.603443 | orchestrator | Friday 30 May 2025 00:30:14 +0000 (0:00:00.299) 0:00:17.903 ************ 2025-05-30 00:30:14.668752 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:30:15.982770 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:30:15.982887 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:30:15.986582 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:30:15.987488 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:30:15.987969 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:30:15.988858 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:30:15.989581 | orchestrator | 2025-05-30 00:30:15.990110 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-30 00:30:15.990557 | orchestrator | Friday 30 May 2025 00:30:15 +0000 (0:00:01.380) 0:00:19.284 ************ 2025-05-30 00:30:16.054248 | orchestrator | ok: [testbed-manager] 2025-05-30 00:30:16.081329 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:30:16.112199 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:30:16.142393 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:30:16.212696 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:30:16.212857 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:30:16.213997 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:30:16.214899 | orchestrator | 2025-05-30 00:30:16.215620 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-30 00:30:16.216401 | orchestrator | Friday 30 May 2025 00:30:16 +0000 (0:00:00.230) 0:00:19.514 ************ 2025-05-30 00:30:16.282265 | orchestrator | ok: [testbed-manager] 2025-05-30 00:30:16.310578 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:30:16.342618 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:30:16.369142 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:30:16.435690 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:30:16.436156 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:30:16.437176 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:30:16.437507 | orchestrator | 2025-05-30 00:30:16.438626 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-30 00:30:16.439202 | 
orchestrator | Friday 30 May 2025 00:30:16 +0000 (0:00:00.222) 0:00:19.737 ************ 2025-05-30 00:30:16.503830 | orchestrator | ok: [testbed-manager] 2025-05-30 00:30:16.529941 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:30:16.553421 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:30:16.579334 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:30:16.633500 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:30:16.633592 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:30:16.633838 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:30:16.634093 | orchestrator | 2025-05-30 00:30:16.635234 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-30 00:30:16.635466 | orchestrator | Friday 30 May 2025 00:30:16 +0000 (0:00:00.200) 0:00:19.937 ************ 2025-05-30 00:30:16.899217 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:30:16.899392 | orchestrator | 2025-05-30 00:30:16.899410 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-30 00:30:16.899489 | orchestrator | Friday 30 May 2025 00:30:16 +0000 (0:00:00.264) 0:00:20.202 ************ 2025-05-30 00:30:17.454238 | orchestrator | ok: [testbed-manager] 2025-05-30 00:30:17.454429 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:30:17.460356 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:30:17.460383 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:30:17.460387 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:30:17.460960 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:30:17.462847 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:30:17.462856 | orchestrator | 2025-05-30 00:30:17.463581 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-30 00:30:17.464840 | orchestrator | Friday 30 May 2025 00:30:17 +0000 (0:00:00.553) 0:00:20.755 ************ 2025-05-30 00:30:17.527728 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:30:17.578718 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:30:17.605860 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:30:17.667886 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:30:17.669133 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:30:17.670767 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:30:17.672235 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:30:17.673369 | orchestrator | 2025-05-30 00:30:17.674223 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-30 00:30:17.675034 | orchestrator | Friday 30 May 2025 00:30:17 +0000 (0:00:00.215) 0:00:20.970 ************ 2025-05-30 00:30:18.742507 | orchestrator | changed: [testbed-manager] 2025-05-30 00:30:18.744538 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:30:18.745804 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:30:18.747186 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:30:18.749104 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:30:18.749134 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:30:18.749972 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:30:18.750873 | orchestrator | 2025-05-30 00:30:18.751778 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] 
********************* 2025-05-30 00:30:18.751983 | orchestrator | Friday 30 May 2025 00:30:18 +0000 (0:00:01.071) 0:00:22.042 ************ 2025-05-30 00:30:19.294224 | orchestrator | ok: [testbed-manager] 2025-05-30 00:30:19.294484 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:30:19.294859 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:30:19.295660 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:30:19.297500 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:30:19.297935 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:30:19.298733 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:30:19.300322 | orchestrator | 2025-05-30 00:30:19.301398 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-30 00:30:19.302755 | orchestrator | Friday 30 May 2025 00:30:19 +0000 (0:00:00.555) 0:00:22.597 ************ 2025-05-30 00:30:20.373693 | orchestrator | ok: [testbed-manager] 2025-05-30 00:30:20.373827 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:30:20.374843 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:30:20.374901 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:30:20.375549 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:30:20.376426 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:30:20.377109 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:30:20.377557 | orchestrator | 2025-05-30 00:30:20.378172 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-30 00:30:20.378969 | orchestrator | Friday 30 May 2025 00:30:20 +0000 (0:00:01.077) 0:00:23.674 ************ 2025-05-30 00:30:33.826775 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:30:33.827778 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:30:33.827819 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:30:33.828930 | orchestrator | changed: [testbed-manager] 2025-05-30 00:30:33.830243 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:30:33.831213 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:30:33.832501 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:30:33.833324 | orchestrator | 2025-05-30 00:30:33.834109 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-05-30 00:30:33.834545 | orchestrator | Friday 30 May 2025 00:30:33 +0000 (0:00:13.449) 0:00:37.124 ************ 2025-05-30 00:30:33.899442 | orchestrator | ok: [testbed-manager] 2025-05-30 00:30:33.928583 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:30:33.953905 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:30:33.984160 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:30:34.049435 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:30:34.052397 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:30:34.056302 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:30:34.056353 | orchestrator | 2025-05-30 00:30:34.056367 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-05-30 00:30:34.056380 | orchestrator | Friday 30 May 2025 00:30:34 +0000 (0:00:00.228) 0:00:37.352 ************ 2025-05-30 00:30:34.133560 | orchestrator | ok: [testbed-manager] 2025-05-30 00:30:34.171535 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:30:34.203733 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:30:34.233047 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:30:34.321751 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:30:34.322399 | orchestrator | ok: [testbed-node-1] 2025-05-30 
00:30:34.323362 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:30:34.325796 | orchestrator | 2025-05-30 00:30:34.326482 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-05-30 00:30:34.327716 | orchestrator | Friday 30 May 2025 00:30:34 +0000 (0:00:00.272) 0:00:37.625 ************ 2025-05-30 00:30:34.413810 | orchestrator | ok: [testbed-manager] 2025-05-30 00:30:34.438590 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:30:34.470515 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:30:34.494162 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:30:34.557160 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:30:34.557415 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:30:34.560611 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:30:34.561088 | orchestrator | 2025-05-30 00:30:34.561572 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-05-30 00:30:34.562371 | orchestrator | Friday 30 May 2025 00:30:34 +0000 (0:00:00.234) 0:00:37.859 ************ 2025-05-30 00:30:34.818394 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:30:34.818496 | orchestrator | 2025-05-30 00:30:34.819372 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-05-30 00:30:34.820045 | orchestrator | Friday 30 May 2025 00:30:34 +0000 (0:00:00.261) 0:00:38.121 ************ 2025-05-30 00:30:36.422190 | orchestrator | ok: [testbed-manager] 2025-05-30 00:30:36.422419 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:30:36.423711 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:30:36.423853 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:30:36.425168 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:30:36.425270 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:30:36.426174 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:30:36.427528 | orchestrator | 2025-05-30 00:30:36.428448 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-05-30 00:30:36.428866 | orchestrator | Friday 30 May 2025 00:30:36 +0000 (0:00:01.602) 0:00:39.723 ************ 2025-05-30 00:30:37.525775 | orchestrator | changed: [testbed-manager] 2025-05-30 00:30:37.525885 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:30:37.525899 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:30:37.525911 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:30:37.525922 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:30:37.525933 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:30:37.525944 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:30:37.525955 | orchestrator | 2025-05-30 00:30:37.525968 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-05-30 00:30:37.525980 | orchestrator | Friday 30 May 2025 00:30:37 +0000 (0:00:01.099) 0:00:40.822 ************ 2025-05-30 00:30:38.412051 | orchestrator | ok: [testbed-manager] 2025-05-30 00:30:38.412195 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:30:38.412212 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:30:38.412223 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:30:38.412234 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:30:38.412371 | orchestrator | ok: 
[testbed-node-1] 2025-05-30 00:30:38.413055 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:30:38.413368 | orchestrator | 2025-05-30 00:30:38.413846 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-05-30 00:30:38.414867 | orchestrator | Friday 30 May 2025 00:30:38 +0000 (0:00:00.889) 0:00:41.712 ************ 2025-05-30 00:30:38.702428 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:30:38.705239 | orchestrator | 2025-05-30 00:30:38.705325 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-05-30 00:30:38.705349 | orchestrator | Friday 30 May 2025 00:30:38 +0000 (0:00:00.291) 0:00:42.003 ************ 2025-05-30 00:30:39.730642 | orchestrator | changed: [testbed-manager] 2025-05-30 00:30:39.730759 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:30:39.730775 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:30:39.730786 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:30:39.732028 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:30:39.732725 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:30:39.733346 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:30:39.734950 | orchestrator | 2025-05-30 00:30:39.735651 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-05-30 00:30:39.736353 | orchestrator | Friday 30 May 2025 00:30:39 +0000 (0:00:01.020) 0:00:43.024 ************ 2025-05-30 00:30:39.794436 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:30:39.856532 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:30:39.883033 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:30:40.009728 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:30:40.010191 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:30:40.011005 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:30:40.012110 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:30:40.012796 | orchestrator | 2025-05-30 00:30:40.016100 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-05-30 00:30:40.016146 | orchestrator | Friday 30 May 2025 00:30:40 +0000 (0:00:00.288) 0:00:43.313 ************ 2025-05-30 00:30:52.235674 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:30:52.235775 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:30:52.235785 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:30:52.235889 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:30:52.235901 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:30:52.236406 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:30:52.236471 | orchestrator | changed: [testbed-manager] 2025-05-30 00:30:52.237079 | orchestrator | 2025-05-30 00:30:52.237827 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-05-30 00:30:52.238163 | orchestrator | Friday 30 May 2025 00:30:52 +0000 (0:00:12.217) 0:00:55.531 ************ 2025-05-30 00:30:53.688911 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:30:53.689017 | orchestrator | ok: [testbed-manager] 2025-05-30 00:30:53.690006 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:30:53.691279 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:30:53.691447 | 
orchestrator | ok: [testbed-node-0] 2025-05-30 00:30:53.692480 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:30:53.692681 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:30:53.693466 | orchestrator | 2025-05-30 00:30:53.693967 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-05-30 00:30:53.694500 | orchestrator | Friday 30 May 2025 00:30:53 +0000 (0:00:01.458) 0:00:56.990 ************ 2025-05-30 00:30:54.620268 | orchestrator | ok: [testbed-manager] 2025-05-30 00:30:54.620497 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:30:54.625158 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:30:54.626772 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:30:54.628069 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:30:54.628596 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:30:54.630234 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:30:54.630894 | orchestrator | 2025-05-30 00:30:54.631807 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-05-30 00:30:54.632925 | orchestrator | Friday 30 May 2025 00:30:54 +0000 (0:00:00.930) 0:00:57.920 ************ 2025-05-30 00:30:54.719107 | orchestrator | ok: [testbed-manager] 2025-05-30 00:30:54.762475 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:30:54.793395 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:30:54.825199 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:30:54.907461 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:30:54.907691 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:30:54.907828 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:30:54.911436 | orchestrator | 2025-05-30 00:30:54.911608 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-05-30 00:30:54.911628 | orchestrator | Friday 30 May 2025 00:30:54 +0000 (0:00:00.288) 0:00:58.209 ************ 2025-05-30 00:30:54.988438 | orchestrator | ok: [testbed-manager] 2025-05-30 00:30:55.010385 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:30:55.042495 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:30:55.066654 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:30:55.149910 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:30:55.150603 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:30:55.151318 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:30:55.151796 | orchestrator | 2025-05-30 00:30:55.152195 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-05-30 00:30:55.152657 | orchestrator | Friday 30 May 2025 00:30:55 +0000 (0:00:00.243) 0:00:58.452 ************ 2025-05-30 00:30:55.443120 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:30:55.443460 | orchestrator | 2025-05-30 00:30:55.444373 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-05-30 00:30:55.444948 | orchestrator | Friday 30 May 2025 00:30:55 +0000 (0:00:00.291) 0:00:58.744 ************ 2025-05-30 00:30:57.026340 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:30:57.026445 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:30:57.027291 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:30:57.028141 | orchestrator | ok: [testbed-manager] 2025-05-30 00:30:57.028891 | 
orchestrator | ok: [testbed-node-5] 2025-05-30 00:30:57.029510 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:30:57.029947 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:30:57.030645 | orchestrator | 2025-05-30 00:30:57.031273 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-05-30 00:30:57.031889 | orchestrator | Friday 30 May 2025 00:30:57 +0000 (0:00:01.581) 0:01:00.326 ************ 2025-05-30 00:30:57.568110 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:30:57.568219 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:30:57.568375 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:30:57.568541 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:30:57.569406 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:30:57.569779 | orchestrator | changed: [testbed-manager] 2025-05-30 00:30:57.572710 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:30:57.572776 | orchestrator | 2025-05-30 00:30:57.573108 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-05-30 00:30:57.573509 | orchestrator | Friday 30 May 2025 00:30:57 +0000 (0:00:00.544) 0:01:00.870 ************ 2025-05-30 00:30:57.648821 | orchestrator | ok: [testbed-manager] 2025-05-30 00:30:57.679522 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:30:57.706262 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:30:57.728993 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:30:57.796142 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:30:57.796462 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:30:57.797912 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:30:57.798159 | orchestrator | 2025-05-30 00:30:57.800183 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-05-30 00:30:57.800668 | orchestrator | Friday 30 May 2025 00:30:57 +0000 (0:00:00.227) 0:01:01.097 ************ 2025-05-30 00:30:58.966873 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:30:58.966980 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:30:58.966995 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:30:58.967958 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:30:58.968340 | orchestrator | ok: [testbed-manager] 2025-05-30 00:30:58.968569 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:30:58.969631 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:30:58.970159 | orchestrator | 2025-05-30 00:30:58.971386 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-05-30 00:30:58.972107 | orchestrator | Friday 30 May 2025 00:30:58 +0000 (0:00:01.167) 0:01:02.265 ************ 2025-05-30 00:31:00.538500 | orchestrator | changed: [testbed-manager] 2025-05-30 00:31:00.538681 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:31:00.540579 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:31:00.542940 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:31:00.543002 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:31:00.543092 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:31:00.543769 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:31:00.544357 | orchestrator | 2025-05-30 00:31:00.545029 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-05-30 00:31:00.545516 | orchestrator | Friday 30 May 2025 00:31:00 +0000 (0:00:01.574) 0:01:03.839 ************ 2025-05-30 00:31:02.768592 | orchestrator | ok: 
[testbed-manager] 2025-05-30 00:31:02.768707 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:31:02.769595 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:31:02.770670 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:31:02.771187 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:31:02.771663 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:31:02.772390 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:31:02.772986 | orchestrator | 2025-05-30 00:31:02.773118 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-05-30 00:31:02.773671 | orchestrator | Friday 30 May 2025 00:31:02 +0000 (0:00:02.229) 0:01:06.069 ************ 2025-05-30 00:31:40.495027 | orchestrator | ok: [testbed-manager] 2025-05-30 00:31:40.495147 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:31:40.495162 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:31:40.495174 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:31:40.495186 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:31:40.495206 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:31:40.495225 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:31:40.495243 | orchestrator | 2025-05-30 00:31:40.495264 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-05-30 00:31:40.495283 | orchestrator | Friday 30 May 2025 00:31:40 +0000 (0:00:37.708) 0:01:43.778 ************ 2025-05-30 00:33:02.991286 | orchestrator | changed: [testbed-manager] 2025-05-30 00:33:02.991463 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:33:02.991482 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:33:02.991494 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:33:02.992283 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:33:02.992867 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:33:02.994579 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:33:02.995506 | orchestrator | 2025-05-30 00:33:02.996322 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-05-30 00:33:02.997412 | orchestrator | Friday 30 May 2025 00:33:02 +0000 (0:01:22.511) 0:03:06.289 ************ 2025-05-30 00:33:04.704996 | orchestrator | ok: [testbed-manager] 2025-05-30 00:33:04.706406 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:33:04.708016 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:33:04.708043 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:33:04.708681 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:33:04.709214 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:33:04.710294 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:33:04.710877 | orchestrator | 2025-05-30 00:33:04.711643 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-05-30 00:33:04.712403 | orchestrator | Friday 30 May 2025 00:33:04 +0000 (0:00:01.718) 0:03:08.008 ************ 2025-05-30 00:33:16.731232 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:33:16.731356 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:33:16.731406 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:33:16.731419 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:33:16.731598 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:33:16.731619 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:33:16.731639 | orchestrator | changed: [testbed-manager] 2025-05-30 00:33:16.732651 | orchestrator | 2025-05-30 00:33:16.733419 | orchestrator | TASK [osism.commons.sysctl : Include sysctl 
tasks] ***************************** 2025-05-30 00:33:16.734469 | orchestrator | Friday 30 May 2025 00:33:16 +0000 (0:00:12.015) 0:03:20.023 ************ 2025-05-30 00:33:17.077121 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-05-30 00:33:17.077572 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-05-30 00:33:17.080089 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-05-30 00:33:17.080122 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-05-30 00:33:17.080135 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-05-30 00:33:17.080544 | orchestrator | 2025-05-30 00:33:17.081262 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-05-30 00:33:17.082165 | orchestrator | Friday 30 May 2025 00:33:17 +0000 (0:00:00.355) 0:03:20.379 ************ 2025-05-30 00:33:17.127863 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-30 00:33:17.153937 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:33:17.154117 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-30 00:33:17.154605 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-30 00:33:17.177848 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:33:17.212525 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:33:17.213219 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-30 00:33:17.239418 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:33:17.776554 | orchestrator | changed: 
[testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-30 00:33:17.776764 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-30 00:33:17.777516 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-30 00:33:17.779514 | orchestrator | 2025-05-30 00:33:17.780672 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-05-30 00:33:17.781122 | orchestrator | Friday 30 May 2025 00:33:17 +0000 (0:00:00.699) 0:03:21.079 ************ 2025-05-30 00:33:17.833570 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-30 00:33:17.834143 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-30 00:33:17.834176 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-30 00:33:17.834188 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-30 00:33:17.834229 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-30 00:33:17.834241 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-30 00:33:17.834517 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-30 00:33:17.866449 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-30 00:33:17.869200 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-30 00:33:17.869247 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-30 00:33:17.869260 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-30 00:33:17.869271 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-30 00:33:17.869282 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-30 00:33:17.869293 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-30 00:33:17.869323 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-30 00:33:17.869732 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-30 00:33:17.870108 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-30 00:33:17.871020 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-30 00:33:17.871146 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-30 00:33:17.940688 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:33:17.940777 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-30 00:33:17.941264 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-30 00:33:17.941536 | orchestrator | skipping: [testbed-node-3] => 
(item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-30 00:33:17.944400 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-30 00:33:17.944435 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-30 00:33:17.944448 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-30 00:33:17.944459 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-30 00:33:17.944470 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-30 00:33:17.944481 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-30 00:33:17.944492 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-30 00:33:17.944502 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-30 00:33:17.944513 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-30 00:33:17.944524 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-30 00:33:17.944723 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-30 00:33:17.945051 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-30 00:33:17.945669 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-30 00:33:17.978670 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:33:17.978812 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-30 00:33:17.979025 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-30 00:33:17.979749 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-30 00:33:17.979886 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-30 00:33:17.980054 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-30 00:33:18.006974 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:33:22.377296 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:33:22.377569 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-30 00:33:22.378968 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-30 00:33:22.379584 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-30 00:33:22.380206 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-30 00:33:22.380766 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-30 00:33:22.381076 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-30 00:33:22.381810 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-30 00:33:22.382186 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-30 00:33:22.382666 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-30 00:33:22.383190 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-30 00:33:22.383666 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-30 00:33:22.384116 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-30 00:33:22.385287 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-30 00:33:22.386209 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-30 00:33:22.386671 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-30 00:33:22.386703 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-30 00:33:22.386885 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-30 00:33:22.387368 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-30 00:33:22.388330 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-30 00:33:22.388676 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-30 00:33:22.390369 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-30 00:33:22.391680 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-30 00:33:22.391768 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-30 00:33:22.392191 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-30 00:33:22.392625 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-30 00:33:22.392969 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-30 00:33:22.393487 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-30 00:33:22.393898 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-30 00:33:22.394287 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-30 00:33:22.394919 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-30 00:33:22.395167 | orchestrator | 2025-05-30 00:33:22.395567 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-05-30 00:33:22.395886 | orchestrator | Friday 30 May 2025 00:33:22 +0000 (0:00:04.597) 0:03:25.676 ************ 2025-05-30 00:33:23.922750 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-30 00:33:23.923052 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 
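
The sysctl tasks above apply per-group kernel tuning by looping over name/value items (rabbitmq tuning lands on the control nodes, the generic vm.swappiness item on every host). A minimal sketch of an equivalent task, assuming the osism.commons.sysctl role wraps the standard ansible.posix.sysctl module and that group membership drives the skips seen above; the parameter names and values are copied from the logged items:

- name: Set sysctl parameters on rabbitmq nodes (sketch)
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    sysctl_set: true        # also apply the value to the running kernel, not just the config file
    state: present
  loop:
    - { name: net.ipv4.tcp_keepalive_time, value: 6 }
    - { name: net.core.rmem_max, value: 16777216 }
    - { name: net.core.somaxconn, value: 4096 }
  when: "'rabbitmq' in group_names"   # assumption: this is why the manager and node-3/4/5 show "skipping" above
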
2025-05-30 00:33:23.924662 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-30 00:33:23.925057 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-30 00:33:23.925735 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-30 00:33:23.925960 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-30 00:33:23.927286 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-30 00:33:23.927311 | orchestrator | 2025-05-30 00:33:23.927647 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-05-30 00:33:23.928923 | orchestrator | Friday 30 May 2025 00:33:23 +0000 (0:00:01.548) 0:03:27.224 ************ 2025-05-30 00:33:23.983338 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-30 00:33:24.010181 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:33:24.093725 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-30 00:33:24.093848 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-30 00:33:25.369522 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:33:25.370755 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:33:25.372766 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-30 00:33:25.374062 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:33:25.374095 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-30 00:33:25.374528 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-30 00:33:25.375350 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-30 00:33:25.376092 | orchestrator | 2025-05-30 00:33:25.376188 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-05-30 00:33:25.376716 | orchestrator | Friday 30 May 2025 00:33:25 +0000 (0:00:01.445) 0:03:28.670 ************ 2025-05-30 00:33:25.421871 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-30 00:33:25.457496 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:33:25.523755 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-30 00:33:25.936613 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-30 00:33:25.937762 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:33:25.938183 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:33:25.940046 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-30 00:33:25.941311 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:33:25.941957 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-30 00:33:25.942864 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 
1024}) 2025-05-30 00:33:25.946937 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-30 00:33:25.949758 | orchestrator | 2025-05-30 00:33:25.949988 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-05-30 00:33:25.951507 | orchestrator | Friday 30 May 2025 00:33:25 +0000 (0:00:00.569) 0:03:29.239 ************ 2025-05-30 00:33:26.025916 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:33:26.051760 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:33:26.073260 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:33:26.097029 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:33:26.226511 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:33:26.227453 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:33:26.227575 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:33:26.228210 | orchestrator | 2025-05-30 00:33:26.229078 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-05-30 00:33:26.229161 | orchestrator | Friday 30 May 2025 00:33:26 +0000 (0:00:00.288) 0:03:29.528 ************ 2025-05-30 00:33:32.183854 | orchestrator | ok: [testbed-manager] 2025-05-30 00:33:32.184131 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:33:32.184158 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:33:32.184629 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:33:32.184728 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:33:32.185117 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:33:32.185616 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:33:32.191202 | orchestrator | 2025-05-30 00:33:32.200142 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-05-30 00:33:32.200192 | orchestrator | Friday 30 May 2025 00:33:32 +0000 (0:00:05.956) 0:03:35.485 ************ 2025-05-30 00:33:32.222607 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-05-30 00:33:32.257012 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:33:32.294828 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-05-30 00:33:32.300750 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-05-30 00:33:32.342586 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:33:32.343299 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-05-30 00:33:32.392020 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:33:32.393728 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-05-30 00:33:32.434816 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:33:32.434904 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-05-30 00:33:32.508778 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:33:32.508949 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:33:32.508968 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-05-30 00:33:32.509185 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:33:32.509727 | orchestrator | 2025-05-30 00:33:32.509958 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-05-30 00:33:32.510220 | orchestrator | Friday 30 May 2025 00:33:32 +0000 (0:00:00.326) 0:03:35.812 ************ 2025-05-30 00:33:34.358637 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-05-30 00:33:34.358792 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-05-30 00:33:34.359591 | orchestrator | 
ok: [testbed-node-5] => (item=cron) 2025-05-30 00:33:34.360996 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-05-30 00:33:34.362598 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-05-30 00:33:34.363335 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-05-30 00:33:34.363909 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-05-30 00:33:34.364361 | orchestrator | 2025-05-30 00:33:34.364988 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-05-30 00:33:34.365640 | orchestrator | Friday 30 May 2025 00:33:34 +0000 (0:00:01.845) 0:03:37.657 ************ 2025-05-30 00:33:34.762791 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:33:34.763541 | orchestrator | 2025-05-30 00:33:34.763893 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-05-30 00:33:34.768115 | orchestrator | Friday 30 May 2025 00:33:34 +0000 (0:00:00.408) 0:03:38.065 ************ 2025-05-30 00:33:36.183783 | orchestrator | ok: [testbed-manager] 2025-05-30 00:33:36.184862 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:33:36.185273 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:33:36.186077 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:33:36.187541 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:33:36.188703 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:33:36.189647 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:33:36.190573 | orchestrator | 2025-05-30 00:33:36.191176 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-05-30 00:33:36.192097 | orchestrator | Friday 30 May 2025 00:33:36 +0000 (0:00:01.418) 0:03:39.484 ************ 2025-05-30 00:33:36.844617 | orchestrator | ok: [testbed-manager] 2025-05-30 00:33:36.844784 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:33:36.846576 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:33:36.848247 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:33:36.848341 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:33:36.849245 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:33:36.850110 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:33:36.850409 | orchestrator | 2025-05-30 00:33:36.851211 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-05-30 00:33:36.851589 | orchestrator | Friday 30 May 2025 00:33:36 +0000 (0:00:00.661) 0:03:40.146 ************ 2025-05-30 00:33:37.482461 | orchestrator | changed: [testbed-manager] 2025-05-30 00:33:37.485039 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:33:37.487090 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:33:37.487121 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:33:37.487133 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:33:37.487460 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:33:37.487932 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:33:37.488461 | orchestrator | 2025-05-30 00:33:37.488870 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-05-30 00:33:37.490709 | orchestrator | Friday 30 May 2025 00:33:37 +0000 (0:00:00.636) 0:03:40.783 ************ 2025-05-30 00:33:38.134197 | orchestrator | ok: [testbed-node-2] 2025-05-30 
00:33:38.136781 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:33:38.136916 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:33:38.136997 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:33:38.142151 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:33:38.142187 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:33:38.142199 | orchestrator | ok: [testbed-manager] 2025-05-30 00:33:38.142211 | orchestrator | 2025-05-30 00:33:38.142578 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-05-30 00:33:38.142838 | orchestrator | Friday 30 May 2025 00:33:38 +0000 (0:00:00.644) 0:03:41.427 ************ 2025-05-30 00:33:39.255549 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748563442.8611803, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-30 00:33:39.255722 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748563475.4282534, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-30 00:33:39.256743 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748563485.1621819, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-30 00:33:39.257472 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748563488.3724344, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-30 00:33:39.258418 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748563489.3410091, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-30 00:33:39.258767 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748563488.0580876, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-30 00:33:39.259787 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748563496.9847882, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-30 00:33:39.260362 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748563467.2423763, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-30 00:33:39.261292 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748563400.9446573, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-30 00:33:39.262197 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748563398.6171734, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-30 00:33:39.262532 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748563404.5508447, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-30 00:33:39.262941 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748563403.6054428, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-30 00:33:39.263800 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748563393.8153243, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-30 00:33:39.264538 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748563402.945485, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-30 00:33:39.264959 | orchestrator | 2025-05-30 00:33:39.266755 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-05-30 00:33:39.266785 | orchestrator | Friday 30 May 2025 00:33:39 +0000 (0:00:01.129) 0:03:42.556 ************ 2025-05-30 00:33:40.447070 | orchestrator | changed: [testbed-manager] 2025-05-30 00:33:40.447176 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:33:40.448935 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:33:40.449929 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:33:40.451281 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:33:40.452259 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:33:40.457122 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:33:40.460744 | orchestrator | 2025-05-30 00:33:40.460825 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-05-30 00:33:40.460842 | orchestrator | Friday 30 May 2025 00:33:40 +0000 (0:00:01.191) 0:03:43.747 ************ 2025-05-30 00:33:41.689115 | orchestrator | changed: [testbed-manager] 2025-05-30 00:33:41.689694 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:33:41.690510 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:33:41.691357 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:33:41.693133 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:33:41.693712 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:33:41.694252 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:33:41.695056 | orchestrator | 2025-05-30 00:33:41.695579 | orchestrator | TASK [osism.commons.motd : Configure SSH to print 
the motd] ******************** 2025-05-30 00:33:41.695919 | orchestrator | Friday 30 May 2025 00:33:41 +0000 (0:00:01.242) 0:03:44.990 ************ 2025-05-30 00:33:41.783571 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:33:41.831628 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:33:41.866463 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:33:41.895729 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:33:41.947553 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:33:41.947913 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:33:41.948723 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:33:41.948945 | orchestrator | 2025-05-30 00:33:41.950413 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-05-30 00:33:41.950438 | orchestrator | Friday 30 May 2025 00:33:41 +0000 (0:00:00.260) 0:03:45.251 ************ 2025-05-30 00:33:42.638705 | orchestrator | ok: [testbed-manager] 2025-05-30 00:33:42.638872 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:33:42.639506 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:33:42.639931 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:33:42.640376 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:33:42.640748 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:33:42.642152 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:33:42.643986 | orchestrator | 2025-05-30 00:33:42.644955 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-05-30 00:33:42.645580 | orchestrator | Friday 30 May 2025 00:33:42 +0000 (0:00:00.688) 0:03:45.939 ************ 2025-05-30 00:33:42.996866 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:33:42.997442 | orchestrator | 2025-05-30 00:33:43.000248 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-05-30 00:33:43.000277 | orchestrator | Friday 30 May 2025 00:33:42 +0000 (0:00:00.359) 0:03:46.298 ************ 2025-05-30 00:33:51.110637 | orchestrator | ok: [testbed-manager] 2025-05-30 00:33:51.110969 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:33:51.111655 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:33:51.112621 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:33:51.113635 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:33:51.114222 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:33:51.114893 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:33:51.115640 | orchestrator | 2025-05-30 00:33:51.116062 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-05-30 00:33:51.116866 | orchestrator | Friday 30 May 2025 00:33:51 +0000 (0:00:08.111) 0:03:54.410 ************ 2025-05-30 00:33:52.368636 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:33:52.368804 | orchestrator | ok: [testbed-manager] 2025-05-30 00:33:52.370245 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:33:52.371360 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:33:52.372566 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:33:52.373456 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:33:52.374283 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:33:52.375136 | orchestrator | 2025-05-30 00:33:52.376339 | orchestrator | 
TASK [osism.services.rng : Manage rng service] ********************************* 2025-05-30 00:33:52.376964 | orchestrator | Friday 30 May 2025 00:33:52 +0000 (0:00:01.260) 0:03:55.670 ************ 2025-05-30 00:33:53.377977 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:33:53.378604 | orchestrator | ok: [testbed-manager] 2025-05-30 00:33:53.383612 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:33:53.384471 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:33:53.385770 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:33:53.386260 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:33:53.387047 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:33:53.387570 | orchestrator | 2025-05-30 00:33:53.388440 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-05-30 00:33:53.388779 | orchestrator | Friday 30 May 2025 00:33:53 +0000 (0:00:01.008) 0:03:56.679 ************ 2025-05-30 00:33:53.772965 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:33:53.775504 | orchestrator | 2025-05-30 00:33:53.776540 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-05-30 00:33:53.777424 | orchestrator | Friday 30 May 2025 00:33:53 +0000 (0:00:00.396) 0:03:57.076 ************ 2025-05-30 00:34:02.211955 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:34:02.212076 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:34:02.213158 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:34:02.213453 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:34:02.215513 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:34:02.216058 | orchestrator | changed: [testbed-manager] 2025-05-30 00:34:02.217218 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:34:02.217757 | orchestrator | 2025-05-30 00:34:02.218710 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-05-30 00:34:02.219367 | orchestrator | Friday 30 May 2025 00:34:02 +0000 (0:00:08.436) 0:04:05.512 ************ 2025-05-30 00:34:02.967655 | orchestrator | changed: [testbed-manager] 2025-05-30 00:34:02.967844 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:34:02.968178 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:34:02.968838 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:34:02.970065 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:34:02.970096 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:34:02.970662 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:34:02.971980 | orchestrator | 2025-05-30 00:34:02.972518 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-05-30 00:34:02.973195 | orchestrator | Friday 30 May 2025 00:34:02 +0000 (0:00:00.755) 0:04:06.268 ************ 2025-05-30 00:34:04.044860 | orchestrator | changed: [testbed-manager] 2025-05-30 00:34:04.044971 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:34:04.044986 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:34:04.045231 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:34:04.045592 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:34:04.046304 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:34:04.046332 | orchestrator | changed: [testbed-node-1] 
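
The smartd tasks in this part of the play follow the usual install → configure → enable pattern (package, /var/log/smartd directory, configuration file, service). A rough sketch with stock Ansible modules; the unit name smartmontools and the file mode are assumptions, not read from the role:

- name: Install smartmontools package
  ansible.builtin.apt:
    name: smartmontools
    state: present

- name: Create /var/log/smartd directory
  ansible.builtin.file:
    path: /var/log/smartd
    state: directory
    owner: root
    group: root
    mode: "0755"

- name: Enable and start the smartd service
  ansible.builtin.systemd:
    name: smartmontools   # assumption: the unit may be named smartd on some distributions
    enabled: true
    state: started
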
2025-05-30 00:34:04.046610 | orchestrator | 2025-05-30 00:34:04.046963 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-05-30 00:34:04.047284 | orchestrator | Friday 30 May 2025 00:34:04 +0000 (0:00:01.079) 0:04:07.347 ************ 2025-05-30 00:34:05.081503 | orchestrator | changed: [testbed-manager] 2025-05-30 00:34:05.081681 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:34:05.082316 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:34:05.083255 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:34:05.084693 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:34:05.085053 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:34:05.086832 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:34:05.086944 | orchestrator | 2025-05-30 00:34:05.088200 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-05-30 00:34:05.089020 | orchestrator | Friday 30 May 2025 00:34:05 +0000 (0:00:01.035) 0:04:08.383 ************ 2025-05-30 00:34:05.193375 | orchestrator | ok: [testbed-manager] 2025-05-30 00:34:05.228260 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:34:05.262445 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:34:05.298899 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:34:05.367106 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:34:05.367291 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:34:05.368268 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:34:05.368619 | orchestrator | 2025-05-30 00:34:05.370105 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-05-30 00:34:05.370650 | orchestrator | Friday 30 May 2025 00:34:05 +0000 (0:00:00.287) 0:04:08.670 ************ 2025-05-30 00:34:05.488028 | orchestrator | ok: [testbed-manager] 2025-05-30 00:34:05.527104 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:34:05.560045 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:34:05.593659 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:34:05.670292 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:34:05.670922 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:34:05.671508 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:34:05.671947 | orchestrator | 2025-05-30 00:34:05.672828 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-05-30 00:34:05.675712 | orchestrator | Friday 30 May 2025 00:34:05 +0000 (0:00:00.302) 0:04:08.973 ************ 2025-05-30 00:34:05.770760 | orchestrator | ok: [testbed-manager] 2025-05-30 00:34:05.797426 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:34:05.831867 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:34:05.861652 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:34:05.958177 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:34:05.958357 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:34:05.959193 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:34:05.963190 | orchestrator | 2025-05-30 00:34:05.963233 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-05-30 00:34:05.963247 | orchestrator | Friday 30 May 2025 00:34:05 +0000 (0:00:00.288) 0:04:09.261 ************ 2025-05-30 00:34:11.865384 | orchestrator | ok: [testbed-manager] 2025-05-30 00:34:11.865641 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:34:11.866611 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:34:11.868473 | orchestrator 
| ok: [testbed-node-4] 2025-05-30 00:34:11.868513 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:34:11.868743 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:34:11.869221 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:34:11.870310 | orchestrator | 2025-05-30 00:34:11.871584 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-05-30 00:34:11.872282 | orchestrator | Friday 30 May 2025 00:34:11 +0000 (0:00:05.905) 0:04:15.167 ************ 2025-05-30 00:34:12.259577 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:34:12.260285 | orchestrator | 2025-05-30 00:34:12.261215 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-05-30 00:34:12.262643 | orchestrator | Friday 30 May 2025 00:34:12 +0000 (0:00:00.394) 0:04:15.562 ************ 2025-05-30 00:34:12.335674 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-05-30 00:34:12.335760 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-05-30 00:34:12.376736 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-05-30 00:34:12.379499 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:34:12.379921 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-05-30 00:34:12.381338 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-05-30 00:34:12.382171 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-05-30 00:34:12.428962 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:34:12.431173 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-05-30 00:34:12.432830 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-05-30 00:34:12.465728 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:34:12.466532 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-05-30 00:34:12.507551 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-05-30 00:34:12.508168 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:34:12.508223 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-05-30 00:34:12.587149 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:34:12.587236 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-05-30 00:34:12.588359 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:34:12.588478 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-05-30 00:34:12.589535 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-05-30 00:34:12.590468 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:34:12.591292 | orchestrator | 2025-05-30 00:34:12.591706 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-05-30 00:34:12.592428 | orchestrator | Friday 30 May 2025 00:34:12 +0000 (0:00:00.325) 0:04:15.888 ************ 2025-05-30 00:34:12.959319 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:34:12.959949 | orchestrator | 2025-05-30 00:34:12.963258 | 
orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-05-30 00:34:12.963295 | orchestrator | Friday 30 May 2025 00:34:12 +0000 (0:00:00.373) 0:04:16.261 ************ 2025-05-30 00:34:13.032042 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-05-30 00:34:13.070728 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-05-30 00:34:13.070794 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:34:13.071157 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-05-30 00:34:13.106834 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:34:13.145525 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:34:13.147143 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-05-30 00:34:13.147852 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-05-30 00:34:13.194537 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:34:13.195360 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-05-30 00:34:13.275953 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:34:13.276475 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:34:13.277147 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-05-30 00:34:13.278090 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:34:13.278366 | orchestrator | 2025-05-30 00:34:13.279483 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-05-30 00:34:13.280037 | orchestrator | Friday 30 May 2025 00:34:13 +0000 (0:00:00.318) 0:04:16.579 ************ 2025-05-30 00:34:13.670473 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:34:13.671390 | orchestrator | 2025-05-30 00:34:13.675303 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-05-30 00:34:13.675343 | orchestrator | Friday 30 May 2025 00:34:13 +0000 (0:00:00.393) 0:04:16.973 ************ 2025-05-30 00:34:47.535680 | orchestrator | changed: [testbed-manager] 2025-05-30 00:34:47.535804 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:34:47.535821 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:34:47.535963 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:34:47.535982 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:34:47.536020 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:34:47.537154 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:34:47.538448 | orchestrator | 2025-05-30 00:34:47.539091 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-05-30 00:34:47.541752 | orchestrator | Friday 30 May 2025 00:34:47 +0000 (0:00:33.855) 0:04:50.828 ************ 2025-05-30 00:34:55.777285 | orchestrator | changed: [testbed-manager] 2025-05-30 00:34:55.777883 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:34:55.777926 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:34:55.778936 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:34:55.780014 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:34:55.780930 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:34:55.781286 | orchestrator | changed: [testbed-node-1] 
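
The cleanup tasks above ("Cleanup installed packages", "Remove cloudinit package") strip packages that are not wanted on testbed nodes. Roughly equivalent stock tasks, assuming the role uses ansible.builtin.apt; the cleanup_packages variable name and the purge flag are assumptions for illustration:

- name: Remove unwanted packages (sketch)
  ansible.builtin.apt:
    name: "{{ cleanup_packages }}"   # assumption: the role takes its package list from a variable like this
    state: absent

- name: Remove the cloud-init package
  ansible.builtin.apt:
    name: cloud-init
    state: absent
    purge: true                      # assumption: configuration files may or may not be purged by the role
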
2025-05-30 00:34:55.782139 | orchestrator | 2025-05-30 00:34:55.783233 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-05-30 00:34:55.783388 | orchestrator | Friday 30 May 2025 00:34:55 +0000 (0:00:08.238) 0:04:59.067 ************ 2025-05-30 00:35:03.524546 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:35:03.524667 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:35:03.524905 | orchestrator | changed: [testbed-manager] 2025-05-30 00:35:03.527456 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:35:03.527640 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:35:03.528937 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:35:03.529733 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:35:03.530624 | orchestrator | 2025-05-30 00:35:03.531562 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-05-30 00:35:03.531779 | orchestrator | Friday 30 May 2025 00:35:03 +0000 (0:00:07.756) 0:05:06.824 ************ 2025-05-30 00:35:05.239725 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:35:05.241068 | orchestrator | ok: [testbed-manager] 2025-05-30 00:35:05.241362 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:35:05.243090 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:35:05.243676 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:35:05.244030 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:35:05.244321 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:35:05.244801 | orchestrator | 2025-05-30 00:35:05.245378 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-05-30 00:35:05.245935 | orchestrator | Friday 30 May 2025 00:35:05 +0000 (0:00:01.717) 0:05:08.541 ************ 2025-05-30 00:35:11.092103 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:35:11.092350 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:35:11.092375 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:35:11.092765 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:35:11.093032 | orchestrator | changed: [testbed-manager] 2025-05-30 00:35:11.093415 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:35:11.094345 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:35:11.097594 | orchestrator | 2025-05-30 00:35:11.098189 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-05-30 00:35:11.098212 | orchestrator | Friday 30 May 2025 00:35:11 +0000 (0:00:05.851) 0:05:14.393 ************ 2025-05-30 00:35:11.476613 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:35:11.477329 | orchestrator | 2025-05-30 00:35:11.478307 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-05-30 00:35:11.479235 | orchestrator | Friday 30 May 2025 00:35:11 +0000 (0:00:00.386) 0:05:14.779 ************ 2025-05-30 00:35:12.182311 | orchestrator | changed: [testbed-manager] 2025-05-30 00:35:12.182530 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:35:12.183621 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:35:12.183644 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:35:12.183951 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:35:12.184809 | orchestrator | changed: 
[testbed-node-1] 2025-05-30 00:35:12.184941 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:35:12.186135 | orchestrator | 2025-05-30 00:35:12.186722 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-05-30 00:35:12.187000 | orchestrator | Friday 30 May 2025 00:35:12 +0000 (0:00:00.704) 0:05:15.484 ************ 2025-05-30 00:35:14.004972 | orchestrator | ok: [testbed-manager] 2025-05-30 00:35:14.005490 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:35:14.005851 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:35:14.006170 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:35:14.007142 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:35:14.007253 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:35:14.008010 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:35:14.008520 | orchestrator | 2025-05-30 00:35:14.009046 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-05-30 00:35:14.009627 | orchestrator | Friday 30 May 2025 00:35:13 +0000 (0:00:01.821) 0:05:17.305 ************ 2025-05-30 00:35:14.771616 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:35:14.771939 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:35:14.773168 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:35:14.774230 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:35:14.775197 | orchestrator | changed: [testbed-manager] 2025-05-30 00:35:14.776543 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:35:14.777346 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:35:14.778573 | orchestrator | 2025-05-30 00:35:14.779330 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-05-30 00:35:14.779950 | orchestrator | Friday 30 May 2025 00:35:14 +0000 (0:00:00.768) 0:05:18.074 ************ 2025-05-30 00:35:14.850933 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:35:14.896611 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:35:14.930160 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:35:14.963697 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:35:15.006745 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:35:15.074851 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:35:15.075513 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:35:15.076222 | orchestrator | 2025-05-30 00:35:15.077253 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-05-30 00:35:15.077993 | orchestrator | Friday 30 May 2025 00:35:15 +0000 (0:00:00.303) 0:05:18.377 ************ 2025-05-30 00:35:15.148718 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:35:15.181411 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:35:15.213477 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:35:15.244766 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:35:15.291802 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:35:15.480169 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:35:15.480365 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:35:15.481409 | orchestrator | 2025-05-30 00:35:15.484279 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-05-30 00:35:15.484310 | orchestrator | Friday 30 May 2025 00:35:15 +0000 (0:00:00.405) 0:05:18.782 ************ 2025-05-30 00:35:15.557047 | orchestrator | ok: [testbed-manager] 2025-05-30 00:35:15.601487 | 
orchestrator | ok: [testbed-node-3] 2025-05-30 00:35:15.677012 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:35:15.720243 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:35:15.781730 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:35:15.781982 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:35:15.783161 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:35:15.784142 | orchestrator | 2025-05-30 00:35:15.785230 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-05-30 00:35:15.786105 | orchestrator | Friday 30 May 2025 00:35:15 +0000 (0:00:00.300) 0:05:19.083 ************ 2025-05-30 00:35:15.898856 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:35:15.939959 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:35:15.968607 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:35:16.002388 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:35:16.075621 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:35:16.076157 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:35:16.076631 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:35:16.077239 | orchestrator | 2025-05-30 00:35:16.078405 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-05-30 00:35:16.080650 | orchestrator | Friday 30 May 2025 00:35:16 +0000 (0:00:00.295) 0:05:19.379 ************ 2025-05-30 00:35:16.176601 | orchestrator | ok: [testbed-manager] 2025-05-30 00:35:16.225755 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:35:16.261922 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:35:16.297109 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:35:16.378633 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:35:16.378788 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:35:16.379202 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:35:16.379583 | orchestrator | 2025-05-30 00:35:16.380015 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-05-30 00:35:16.380405 | orchestrator | Friday 30 May 2025 00:35:16 +0000 (0:00:00.302) 0:05:19.682 ************ 2025-05-30 00:35:16.479175 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:35:16.514575 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:35:16.547296 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:35:16.576623 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:35:16.629471 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:35:16.630387 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:35:16.633295 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:35:16.633319 | orchestrator | 2025-05-30 00:35:16.633332 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-05-30 00:35:16.633620 | orchestrator | Friday 30 May 2025 00:35:16 +0000 (0:00:00.251) 0:05:19.933 ************ 2025-05-30 00:35:16.749297 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:35:16.784391 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:35:16.814250 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:35:16.844773 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:35:16.895698 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:35:16.896659 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:35:16.901234 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:35:16.901517 | orchestrator | 2025-05-30 00:35:16.902346 | orchestrator | TASK 
[osism.services.docker : Include docker install tasks] ******************** 2025-05-30 00:35:16.902839 | orchestrator | Friday 30 May 2025 00:35:16 +0000 (0:00:00.265) 0:05:20.198 ************ 2025-05-30 00:35:17.415114 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:35:17.415194 | orchestrator | 2025-05-30 00:35:17.415952 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-05-30 00:35:17.417243 | orchestrator | Friday 30 May 2025 00:35:17 +0000 (0:00:00.509) 0:05:20.708 ************ 2025-05-30 00:35:18.305067 | orchestrator | ok: [testbed-manager] 2025-05-30 00:35:18.305370 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:35:18.306224 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:35:18.307133 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:35:18.307852 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:35:18.309295 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:35:18.309327 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:35:18.310064 | orchestrator | 2025-05-30 00:35:18.310477 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-05-30 00:35:18.311729 | orchestrator | Friday 30 May 2025 00:35:18 +0000 (0:00:00.896) 0:05:21.604 ************ 2025-05-30 00:35:21.043347 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:35:21.044389 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:35:21.045506 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:35:21.047385 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:35:21.047507 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:35:21.048765 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:35:21.049878 | orchestrator | ok: [testbed-manager] 2025-05-30 00:35:21.050827 | orchestrator | 2025-05-30 00:35:21.051528 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-05-30 00:35:21.052409 | orchestrator | Friday 30 May 2025 00:35:21 +0000 (0:00:02.741) 0:05:24.346 ************ 2025-05-30 00:35:21.120318 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-05-30 00:35:21.121032 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-05-30 00:35:21.194933 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-05-30 00:35:21.195572 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-05-30 00:35:21.196735 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-05-30 00:35:21.197676 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-05-30 00:35:21.270289 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:35:21.271721 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-05-30 00:35:21.272042 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-05-30 00:35:21.272820 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-05-30 00:35:21.374368 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:35:21.375027 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-05-30 00:35:21.379209 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-05-30 00:35:21.379236 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-05-30 
00:35:21.442644 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:35:21.443235 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-05-30 00:35:21.444184 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-05-30 00:35:21.509314 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-05-30 00:35:21.510243 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:35:21.513264 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-05-30 00:35:21.513322 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-05-30 00:35:21.513333 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-05-30 00:35:21.658157 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:35:21.658552 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:35:21.659247 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-05-30 00:35:21.660460 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-05-30 00:35:21.660948 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-05-30 00:35:21.661538 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:35:21.662244 | orchestrator | 2025-05-30 00:35:21.663036 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-05-30 00:35:21.664013 | orchestrator | Friday 30 May 2025 00:35:21 +0000 (0:00:00.615) 0:05:24.962 ************ 2025-05-30 00:35:33.537660 | orchestrator | ok: [testbed-manager] 2025-05-30 00:35:33.537765 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:35:33.538369 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:35:33.538415 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:35:33.540111 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:35:33.541029 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:35:33.541997 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:35:33.542841 | orchestrator | 2025-05-30 00:35:33.544000 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-05-30 00:35:33.544164 | orchestrator | Friday 30 May 2025 00:35:33 +0000 (0:00:11.873) 0:05:36.835 ************ 2025-05-30 00:35:34.720770 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:35:34.721210 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:35:34.722275 | orchestrator | ok: [testbed-manager] 2025-05-30 00:35:34.725335 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:35:34.725371 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:35:34.725786 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:35:34.726219 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:35:34.726752 | orchestrator | 2025-05-30 00:35:34.727282 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-05-30 00:35:34.727935 | orchestrator | Friday 30 May 2025 00:35:34 +0000 (0:00:01.184) 0:05:38.020 ************ 2025-05-30 00:35:42.271224 | orchestrator | ok: [testbed-manager] 2025-05-30 00:35:42.271881 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:35:42.275212 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:35:42.275254 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:35:42.275266 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:35:42.275277 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:35:42.275570 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:35:42.276278 | orchestrator | 
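
The "Add repository gpg key" and "Add repository" tasks above register Docker's upstream apt source before the pinned packages are installed. A minimal sketch under the assumption that get_url and apt_repository are used; the keyring path, signed-by handling, and the linux/ubuntu URL (matching the Ubuntu 24.04 nodes in this job) are assumptions:

- name: Add Docker repository signing key
  ansible.builtin.get_url:
    url: https://download.docker.com/linux/ubuntu/gpg
    dest: /etc/apt/keyrings/docker.asc   # assumes /etc/apt/keyrings already exists
    mode: "0644"

- name: Add Docker apt repository
  ansible.builtin.apt_repository:
    repo: >-
      deb [signed-by=/etc/apt/keyrings/docker.asc]
      https://download.docker.com/linux/ubuntu
      {{ ansible_distribution_release }} stable
    state: present

- name: Update package cache
  ansible.builtin.apt:
    update_cache: true
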
2025-05-30 00:35:42.276405 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-05-30 00:35:42.278893 | orchestrator | Friday 30 May 2025 00:35:42 +0000 (0:00:07.552) 0:05:45.572 ************ 2025-05-30 00:35:45.506907 | orchestrator | changed: [testbed-manager] 2025-05-30 00:35:45.507064 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:35:45.507082 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:35:45.507094 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:35:45.507174 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:35:45.507720 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:35:45.509465 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:35:45.509549 | orchestrator | 2025-05-30 00:35:45.510433 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-05-30 00:35:45.510487 | orchestrator | Friday 30 May 2025 00:35:45 +0000 (0:00:03.229) 0:05:48.801 ************ 2025-05-30 00:35:46.780860 | orchestrator | ok: [testbed-manager] 2025-05-30 00:35:46.781995 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:35:46.782690 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:35:46.783780 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:35:46.786841 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:35:46.787048 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:35:46.787942 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:35:46.788259 | orchestrator | 2025-05-30 00:35:46.788896 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-05-30 00:35:46.789316 | orchestrator | Friday 30 May 2025 00:35:46 +0000 (0:00:01.279) 0:05:50.081 ************ 2025-05-30 00:35:48.334648 | orchestrator | ok: [testbed-manager] 2025-05-30 00:35:48.334802 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:35:48.335405 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:35:48.336941 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:35:48.337331 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:35:48.337917 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:35:48.340406 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:35:48.341057 | orchestrator | 2025-05-30 00:35:48.341720 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-05-30 00:35:48.342424 | orchestrator | Friday 30 May 2025 00:35:48 +0000 (0:00:01.553) 0:05:51.635 ************ 2025-05-30 00:35:48.544639 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:35:48.606586 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:35:48.670835 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:35:48.747190 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:35:48.926237 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:35:48.926322 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:35:48.927840 | orchestrator | changed: [testbed-manager] 2025-05-30 00:35:48.928779 | orchestrator | 2025-05-30 00:35:48.929657 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-05-30 00:35:48.930589 | orchestrator | Friday 30 May 2025 00:35:48 +0000 (0:00:00.592) 0:05:52.227 ************ 2025-05-30 00:35:58.682290 | orchestrator | ok: [testbed-manager] 2025-05-30 00:35:58.682433 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:35:58.682532 | orchestrator | changed: [testbed-node-0] 
2025-05-30 00:35:58.685211 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:35:58.686805 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:35:58.687416 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:35:58.687852 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:35:58.688676 | orchestrator | 2025-05-30 00:35:58.688940 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-05-30 00:35:58.692182 | orchestrator | Friday 30 May 2025 00:35:58 +0000 (0:00:09.752) 0:06:01.980 ************ 2025-05-30 00:35:59.598763 | orchestrator | changed: [testbed-manager] 2025-05-30 00:35:59.598885 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:35:59.599898 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:35:59.601861 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:35:59.603105 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:35:59.603689 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:35:59.604506 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:35:59.605250 | orchestrator | 2025-05-30 00:35:59.605851 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-05-30 00:35:59.606808 | orchestrator | Friday 30 May 2025 00:35:59 +0000 (0:00:00.920) 0:06:02.901 ************ 2025-05-30 00:36:12.044154 | orchestrator | ok: [testbed-manager] 2025-05-30 00:36:12.044282 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:36:12.046235 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:36:12.046263 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:36:12.047361 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:36:12.049230 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:36:12.051085 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:36:12.055396 | orchestrator | 2025-05-30 00:36:12.055429 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-05-30 00:36:12.055443 | orchestrator | Friday 30 May 2025 00:36:12 +0000 (0:00:12.440) 0:06:15.342 ************ 2025-05-30 00:36:24.432839 | orchestrator | ok: [testbed-manager] 2025-05-30 00:36:24.432962 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:36:24.433451 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:36:24.434235 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:36:24.436579 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:36:24.437450 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:36:24.438180 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:36:24.439057 | orchestrator | 2025-05-30 00:36:24.439762 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-05-30 00:36:24.440521 | orchestrator | Friday 30 May 2025 00:36:24 +0000 (0:00:12.387) 0:06:27.729 ************ 2025-05-30 00:36:24.799718 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-05-30 00:36:25.644212 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-05-30 00:36:25.645283 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-05-30 00:36:25.645970 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-05-30 00:36:25.646567 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-05-30 00:36:25.647974 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-05-30 00:36:25.648992 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-05-30 00:36:25.650282 | 
orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-05-30 00:36:25.651304 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-05-30 00:36:25.652038 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-05-30 00:36:25.652732 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-05-30 00:36:25.653356 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-05-30 00:36:25.654139 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-05-30 00:36:25.654662 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-05-30 00:36:25.655777 | orchestrator | 2025-05-30 00:36:25.655816 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-05-30 00:36:25.656067 | orchestrator | Friday 30 May 2025 00:36:25 +0000 (0:00:01.216) 0:06:28.946 ************ 2025-05-30 00:36:25.774362 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:36:25.847599 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:36:25.912755 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:36:26.072995 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:36:26.195734 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:36:26.195992 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:36:26.196033 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:36:26.198487 | orchestrator | 2025-05-30 00:36:26.198681 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-05-30 00:36:26.199159 | orchestrator | Friday 30 May 2025 00:36:26 +0000 (0:00:00.549) 0:06:29.495 ************ 2025-05-30 00:36:29.874964 | orchestrator | ok: [testbed-manager] 2025-05-30 00:36:29.875735 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:36:29.875854 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:36:29.877556 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:36:29.879137 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:36:29.879880 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:36:29.880724 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:36:29.881082 | orchestrator | 2025-05-30 00:36:29.881589 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-05-30 00:36:29.882323 | orchestrator | Friday 30 May 2025 00:36:29 +0000 (0:00:03.678) 0:06:33.174 ************ 2025-05-30 00:36:30.026076 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:36:30.093215 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:36:30.165713 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:36:30.479222 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:36:30.547061 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:36:30.647864 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:36:30.648065 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:36:30.648978 | orchestrator | 2025-05-30 00:36:30.649893 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-05-30 00:36:30.650260 | orchestrator | Friday 30 May 2025 00:36:30 +0000 (0:00:00.773) 0:06:33.948 ************ 2025-05-30 00:36:30.730383 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-05-30 00:36:30.730517 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-05-30 00:36:30.804900 | orchestrator | skipping: [testbed-manager] 2025-05-30 
00:36:30.805047 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-05-30 00:36:30.805648 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-05-30 00:36:30.875731 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:36:30.875871 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-05-30 00:36:30.875886 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-05-30 00:36:30.955231 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:36:30.955390 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-05-30 00:36:30.955876 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-05-30 00:36:31.029260 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:36:31.029365 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-05-30 00:36:31.029672 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-05-30 00:36:31.103068 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:36:31.103159 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-05-30 00:36:31.103429 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-05-30 00:36:31.225107 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:36:31.225184 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-05-30 00:36:31.225197 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-05-30 00:36:31.225776 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:36:31.225798 | orchestrator | 2025-05-30 00:36:31.226641 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-05-30 00:36:31.226834 | orchestrator | Friday 30 May 2025 00:36:31 +0000 (0:00:00.580) 0:06:34.528 ************ 2025-05-30 00:36:31.365817 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:36:31.443956 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:36:31.514989 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:36:31.583551 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:36:31.667680 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:36:31.785253 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:36:31.785458 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:36:31.786783 | orchestrator | 2025-05-30 00:36:31.787718 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-05-30 00:36:31.788342 | orchestrator | Friday 30 May 2025 00:36:31 +0000 (0:00:00.557) 0:06:35.085 ************ 2025-05-30 00:36:31.923422 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:36:31.988699 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:36:32.053071 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:36:32.121793 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:36:32.185841 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:36:32.292050 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:36:32.292973 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:36:32.294187 | orchestrator | 2025-05-30 00:36:32.295098 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-05-30 00:36:32.295881 | orchestrator | Friday 30 May 2025 00:36:32 +0000 (0:00:00.506) 0:06:35.592 ************ 2025-05-30 00:36:32.432016 | orchestrator | skipping: [testbed-manager] 2025-05-30 
00:36:32.572151 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:36:32.637387 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:36:32.704725 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:36:32.832662 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:36:32.833179 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:36:32.834137 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:36:32.838441 | orchestrator | 2025-05-30 00:36:32.838498 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-05-30 00:36:32.838512 | orchestrator | Friday 30 May 2025 00:36:32 +0000 (0:00:00.543) 0:06:36.136 ************ 2025-05-30 00:36:38.961243 | orchestrator | ok: [testbed-manager] 2025-05-30 00:36:38.961362 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:36:38.962321 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:36:38.962620 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:36:38.964645 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:36:38.965605 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:36:38.966661 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:36:38.967368 | orchestrator | 2025-05-30 00:36:38.968453 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-05-30 00:36:38.970603 | orchestrator | Friday 30 May 2025 00:36:38 +0000 (0:00:06.124) 0:06:42.261 ************ 2025-05-30 00:36:39.780584 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:36:39.781298 | orchestrator | 2025-05-30 00:36:39.782738 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-05-30 00:36:39.784491 | orchestrator | Friday 30 May 2025 00:36:39 +0000 (0:00:00.821) 0:06:43.082 ************ 2025-05-30 00:36:40.266159 | orchestrator | ok: [testbed-manager] 2025-05-30 00:36:40.693337 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:36:40.693608 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:36:40.695826 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:36:40.696356 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:36:40.699248 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:36:40.699299 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:36:40.699311 | orchestrator | 2025-05-30 00:36:40.701067 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-05-30 00:36:40.702164 | orchestrator | Friday 30 May 2025 00:36:40 +0000 (0:00:00.910) 0:06:43.993 ************ 2025-05-30 00:36:41.162552 | orchestrator | ok: [testbed-manager] 2025-05-30 00:36:41.590233 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:36:41.591340 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:36:41.591667 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:36:41.591911 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:36:41.593372 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:36:41.594347 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:36:41.595029 | orchestrator | 2025-05-30 00:36:41.595671 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-05-30 00:36:41.597421 | orchestrator | Friday 30 May 2025 00:36:41 +0000 (0:00:00.899) 
0:06:44.893 ************ 2025-05-30 00:36:43.125804 | orchestrator | ok: [testbed-manager] 2025-05-30 00:36:43.126341 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:36:43.129379 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:36:43.129422 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:36:43.129434 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:36:43.129446 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:36:43.130616 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:36:43.132368 | orchestrator | 2025-05-30 00:36:43.132993 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-05-30 00:36:43.133719 | orchestrator | Friday 30 May 2025 00:36:43 +0000 (0:00:01.532) 0:06:46.426 ************ 2025-05-30 00:36:43.282536 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:36:44.497172 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:36:44.497289 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:36:44.497709 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:36:44.498344 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:36:44.501455 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:36:44.502247 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:36:44.502606 | orchestrator | 2025-05-30 00:36:44.503454 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-05-30 00:36:44.504323 | orchestrator | Friday 30 May 2025 00:36:44 +0000 (0:00:01.370) 0:06:47.796 ************ 2025-05-30 00:36:45.880328 | orchestrator | ok: [testbed-manager] 2025-05-30 00:36:45.880949 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:36:45.882915 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:36:45.883377 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:36:45.883742 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:36:45.884589 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:36:45.885809 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:36:45.887011 | orchestrator | 2025-05-30 00:36:45.888482 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-05-30 00:36:45.889497 | orchestrator | Friday 30 May 2025 00:36:45 +0000 (0:00:01.385) 0:06:49.181 ************ 2025-05-30 00:36:47.270978 | orchestrator | changed: [testbed-manager] 2025-05-30 00:36:47.271453 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:36:47.272253 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:36:47.272279 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:36:47.273057 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:36:47.274790 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:36:47.275171 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:36:47.275659 | orchestrator | 2025-05-30 00:36:47.276131 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-05-30 00:36:47.276600 | orchestrator | Friday 30 May 2025 00:36:47 +0000 (0:00:01.388) 0:06:50.569 ************ 2025-05-30 00:36:48.407567 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:36:48.408414 | orchestrator | 2025-05-30 00:36:48.408450 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-05-30 
00:36:48.408465 | orchestrator | Friday 30 May 2025 00:36:48 +0000 (0:00:01.137) 0:06:51.707 ************ 2025-05-30 00:36:49.741797 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:36:49.742153 | orchestrator | ok: [testbed-manager] 2025-05-30 00:36:49.742993 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:36:49.743327 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:36:49.744387 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:36:49.744869 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:36:49.745906 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:36:49.745959 | orchestrator | 2025-05-30 00:36:49.747463 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-05-30 00:36:49.748105 | orchestrator | Friday 30 May 2025 00:36:49 +0000 (0:00:01.335) 0:06:53.042 ************ 2025-05-30 00:36:50.880767 | orchestrator | ok: [testbed-manager] 2025-05-30 00:36:50.881122 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:36:50.882176 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:36:50.883637 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:36:50.883965 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:36:50.884825 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:36:50.886383 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:36:50.886835 | orchestrator | 2025-05-30 00:36:50.887987 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-05-30 00:36:50.888678 | orchestrator | Friday 30 May 2025 00:36:50 +0000 (0:00:01.137) 0:06:54.180 ************ 2025-05-30 00:36:52.046612 | orchestrator | ok: [testbed-manager] 2025-05-30 00:36:52.047296 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:36:52.048205 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:36:52.050233 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:36:52.050442 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:36:52.051522 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:36:52.052690 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:36:52.053227 | orchestrator | 2025-05-30 00:36:52.053995 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-05-30 00:36:52.054545 | orchestrator | Friday 30 May 2025 00:36:52 +0000 (0:00:01.167) 0:06:55.348 ************ 2025-05-30 00:36:53.336016 | orchestrator | ok: [testbed-manager] 2025-05-30 00:36:53.336861 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:36:53.337574 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:36:53.339237 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:36:53.339861 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:36:53.340599 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:36:53.341441 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:36:53.342315 | orchestrator | 2025-05-30 00:36:53.342793 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-05-30 00:36:53.343177 | orchestrator | Friday 30 May 2025 00:36:53 +0000 (0:00:01.288) 0:06:56.636 ************ 2025-05-30 00:36:54.584844 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:36:54.585870 | orchestrator | 2025-05-30 00:36:54.589094 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-30 00:36:54.589128 | orchestrator 
| Friday 30 May 2025 00:36:54 +0000 (0:00:00.923) 0:06:57.560 ************ 2025-05-30 00:36:54.590608 | orchestrator | 2025-05-30 00:36:54.590716 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-30 00:36:54.591783 | orchestrator | Friday 30 May 2025 00:36:54 +0000 (0:00:00.047) 0:06:57.607 ************ 2025-05-30 00:36:54.592302 | orchestrator | 2025-05-30 00:36:54.593001 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-30 00:36:54.593505 | orchestrator | Friday 30 May 2025 00:36:54 +0000 (0:00:00.039) 0:06:57.647 ************ 2025-05-30 00:36:54.593896 | orchestrator | 2025-05-30 00:36:54.594383 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-30 00:36:54.595069 | orchestrator | Friday 30 May 2025 00:36:54 +0000 (0:00:00.041) 0:06:57.688 ************ 2025-05-30 00:36:54.596226 | orchestrator | 2025-05-30 00:36:54.596728 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-30 00:36:54.597158 | orchestrator | Friday 30 May 2025 00:36:54 +0000 (0:00:00.051) 0:06:57.740 ************ 2025-05-30 00:36:54.597657 | orchestrator | 2025-05-30 00:36:54.598329 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-30 00:36:54.601330 | orchestrator | Friday 30 May 2025 00:36:54 +0000 (0:00:00.041) 0:06:57.781 ************ 2025-05-30 00:36:54.601514 | orchestrator | 2025-05-30 00:36:54.602061 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-30 00:36:54.602391 | orchestrator | Friday 30 May 2025 00:36:54 +0000 (0:00:00.053) 0:06:57.835 ************ 2025-05-30 00:36:54.602865 | orchestrator | 2025-05-30 00:36:54.612000 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-30 00:36:54.612158 | orchestrator | Friday 30 May 2025 00:36:54 +0000 (0:00:00.050) 0:06:57.886 ************ 2025-05-30 00:36:55.711189 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:36:55.711886 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:36:55.712395 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:36:55.712908 | orchestrator | 2025-05-30 00:36:55.713425 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-05-30 00:36:55.713802 | orchestrator | Friday 30 May 2025 00:36:55 +0000 (0:00:01.128) 0:06:59.014 ************ 2025-05-30 00:36:57.385090 | orchestrator | changed: [testbed-manager] 2025-05-30 00:36:57.385641 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:36:57.385953 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:36:57.387437 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:36:57.388382 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:36:57.389381 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:36:57.390804 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:36:57.390981 | orchestrator | 2025-05-30 00:36:57.391542 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-05-30 00:36:57.392028 | orchestrator | Friday 30 May 2025 00:36:57 +0000 (0:00:01.670) 0:07:00.684 ************ 2025-05-30 00:36:58.526213 | orchestrator | changed: [testbed-manager] 2025-05-30 00:36:58.526376 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:36:58.526955 | orchestrator | changed: [testbed-node-4] 
2025-05-30 00:36:58.528314 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:36:58.528577 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:36:58.529131 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:36:58.529748 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:36:58.530600 | orchestrator | 2025-05-30 00:36:58.530922 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-05-30 00:36:58.531625 | orchestrator | Friday 30 May 2025 00:36:58 +0000 (0:00:01.139) 0:07:01.824 ************ 2025-05-30 00:36:58.663432 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:37:00.776677 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:37:00.776862 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:37:00.777526 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:37:00.779073 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:37:00.779099 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:37:00.779112 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:37:00.780698 | orchestrator | 2025-05-30 00:37:00.780930 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-05-30 00:37:00.781817 | orchestrator | Friday 30 May 2025 00:37:00 +0000 (0:00:02.250) 0:07:04.075 ************ 2025-05-30 00:37:00.880351 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:37:00.880858 | orchestrator | 2025-05-30 00:37:00.881187 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-05-30 00:37:00.882274 | orchestrator | Friday 30 May 2025 00:37:00 +0000 (0:00:00.106) 0:07:04.181 ************ 2025-05-30 00:37:01.902714 | orchestrator | ok: [testbed-manager] 2025-05-30 00:37:01.903414 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:37:01.904260 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:37:01.904937 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:37:01.906734 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:37:01.906763 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:37:01.907042 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:37:01.907634 | orchestrator | 2025-05-30 00:37:01.908015 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-05-30 00:37:01.908329 | orchestrator | Friday 30 May 2025 00:37:01 +0000 (0:00:01.020) 0:07:05.202 ************ 2025-05-30 00:37:02.040821 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:37:02.123871 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:37:02.189442 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:37:02.268236 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:37:02.528590 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:37:02.664127 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:37:02.664228 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:37:02.664243 | orchestrator | 2025-05-30 00:37:02.665403 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-05-30 00:37:02.665629 | orchestrator | Friday 30 May 2025 00:37:02 +0000 (0:00:00.762) 0:07:05.965 ************ 2025-05-30 00:37:03.540568 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 
00:37:03.540674 | orchestrator | 2025-05-30 00:37:03.544365 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-05-30 00:37:03.544405 | orchestrator | Friday 30 May 2025 00:37:03 +0000 (0:00:00.874) 0:07:06.840 ************ 2025-05-30 00:37:04.371077 | orchestrator | ok: [testbed-manager] 2025-05-30 00:37:04.371297 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:37:04.371693 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:37:04.371993 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:37:04.372708 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:37:04.373188 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:37:04.373686 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:37:04.374221 | orchestrator | 2025-05-30 00:37:04.375370 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-05-30 00:37:04.376140 | orchestrator | Friday 30 May 2025 00:37:04 +0000 (0:00:00.831) 0:07:07.671 ************ 2025-05-30 00:37:06.987624 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-05-30 00:37:06.992447 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-05-30 00:37:06.992508 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-05-30 00:37:06.992522 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-05-30 00:37:06.992534 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-05-30 00:37:06.992546 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-05-30 00:37:06.995898 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-05-30 00:37:06.996771 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-05-30 00:37:06.997980 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-05-30 00:37:06.998756 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-05-30 00:37:06.999355 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-05-30 00:37:07.000053 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-05-30 00:37:07.000651 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-05-30 00:37:07.001625 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-05-30 00:37:07.001666 | orchestrator | 2025-05-30 00:37:07.002233 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-05-30 00:37:07.002778 | orchestrator | Friday 30 May 2025 00:37:06 +0000 (0:00:02.614) 0:07:10.286 ************ 2025-05-30 00:37:07.120963 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:37:07.201321 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:37:07.267851 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:37:07.335821 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:37:07.412969 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:37:07.509320 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:37:07.509999 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:37:07.510833 | orchestrator | 2025-05-30 00:37:07.516165 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-05-30 00:37:07.516191 | orchestrator | Friday 30 May 2025 00:37:07 +0000 (0:00:00.523) 0:07:10.810 ************ 2025-05-30 00:37:08.350556 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:37:08.351135 | orchestrator | 2025-05-30 00:37:08.351310 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-05-30 00:37:08.351645 | orchestrator | Friday 30 May 2025 00:37:08 +0000 (0:00:00.840) 0:07:11.650 ************ 2025-05-30 00:37:09.180812 | orchestrator | ok: [testbed-manager] 2025-05-30 00:37:09.180946 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:37:09.180973 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:37:09.181275 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:37:09.183470 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:37:09.183530 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:37:09.184367 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:37:09.184719 | orchestrator | 2025-05-30 00:37:09.185917 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-05-30 00:37:09.187817 | orchestrator | Friday 30 May 2025 00:37:09 +0000 (0:00:00.828) 0:07:12.479 ************ 2025-05-30 00:37:09.610716 | orchestrator | ok: [testbed-manager] 2025-05-30 00:37:09.679139 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:37:10.205600 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:37:10.206259 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:37:10.209105 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:37:10.209908 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:37:10.210708 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:37:10.211552 | orchestrator | 2025-05-30 00:37:10.212413 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-05-30 00:37:10.212969 | orchestrator | Friday 30 May 2025 00:37:10 +0000 (0:00:01.026) 0:07:13.506 ************ 2025-05-30 00:37:10.358304 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:37:10.434844 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:37:10.500994 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:37:10.571841 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:37:10.639005 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:37:10.735909 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:37:10.736629 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:37:10.737125 | orchestrator | 2025-05-30 00:37:10.738356 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-05-30 00:37:10.741943 | orchestrator | Friday 30 May 2025 00:37:10 +0000 (0:00:00.532) 0:07:14.038 ************ 2025-05-30 00:37:12.150069 | orchestrator | ok: [testbed-manager] 2025-05-30 00:37:12.150260 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:37:12.150282 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:37:12.150372 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:37:12.152341 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:37:12.154120 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:37:12.154996 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:37:12.155751 | orchestrator | 2025-05-30 00:37:12.156668 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-05-30 00:37:12.157015 | orchestrator | Friday 30 May 2025 00:37:12 +0000 (0:00:01.408) 0:07:15.447 ************ 2025-05-30 
00:37:12.295352 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:37:12.369772 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:37:12.445082 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:37:12.517054 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:37:12.587299 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:37:12.695826 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:37:12.696353 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:37:12.697514 | orchestrator | 2025-05-30 00:37:12.698164 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-05-30 00:37:12.699242 | orchestrator | Friday 30 May 2025 00:37:12 +0000 (0:00:00.548) 0:07:15.995 ************ 2025-05-30 00:37:14.675011 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:37:14.675120 | orchestrator | ok: [testbed-manager] 2025-05-30 00:37:14.675698 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:37:14.676541 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:37:14.676957 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:37:14.677603 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:37:14.678145 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:37:14.679719 | orchestrator | 2025-05-30 00:37:14.680107 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-05-30 00:37:14.680568 | orchestrator | Friday 30 May 2025 00:37:14 +0000 (0:00:01.976) 0:07:17.972 ************ 2025-05-30 00:37:15.951899 | orchestrator | ok: [testbed-manager] 2025-05-30 00:37:15.952534 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:37:15.953417 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:37:15.954512 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:37:15.955107 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:37:15.955731 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:37:15.956704 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:37:15.957736 | orchestrator | 2025-05-30 00:37:15.958620 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-05-30 00:37:15.958974 | orchestrator | Friday 30 May 2025 00:37:15 +0000 (0:00:01.278) 0:07:19.251 ************ 2025-05-30 00:37:17.691303 | orchestrator | ok: [testbed-manager] 2025-05-30 00:37:17.691657 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:37:17.691700 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:37:17.691721 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:37:17.692046 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:37:17.692827 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:37:17.692918 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:37:17.693659 | orchestrator | 2025-05-30 00:37:17.694852 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-05-30 00:37:17.695017 | orchestrator | Friday 30 May 2025 00:37:17 +0000 (0:00:01.740) 0:07:20.991 ************ 2025-05-30 00:37:19.349545 | orchestrator | ok: [testbed-manager] 2025-05-30 00:37:19.349973 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:37:19.351196 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:37:19.352222 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:37:19.352602 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:37:19.355350 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:37:19.355707 | orchestrator | changed: [testbed-node-2] 
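
Note: the osism.target entries above amount to installing a systemd target unit and enabling it, presumably so that compose-managed services can be grouped under one target. A minimal sketch with an assumed unit body follows; the real file shipped by osism.commons.docker_compose may differ.

# Minimal sketch; the unit content below is an assumption, not the shipped file.
- name: Copy osism.target systemd file
  ansible.builtin.copy:
    content: |
      [Unit]
      Description=OSISM services

      [Install]
      WantedBy=multi-user.target
    dest: /etc/systemd/system/osism.target
    mode: "0644"

- name: Enable osism.target
  ansible.builtin.systemd_service:
    name: osism.target
    enabled: true
    daemon_reload: true

Setting daemon_reload ensures systemd picks up the newly copied unit before it is enabled.
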
2025-05-30 00:37:19.359405 | orchestrator | 2025-05-30 00:37:19.360016 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-30 00:37:19.360505 | orchestrator | Friday 30 May 2025 00:37:19 +0000 (0:00:01.661) 0:07:22.652 ************ 2025-05-30 00:37:19.845913 | orchestrator | ok: [testbed-manager] 2025-05-30 00:37:20.268210 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:37:20.268792 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:37:20.269418 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:37:20.270186 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:37:20.270603 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:37:20.271285 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:37:20.272211 | orchestrator | 2025-05-30 00:37:20.272940 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-30 00:37:20.273940 | orchestrator | Friday 30 May 2025 00:37:20 +0000 (0:00:00.918) 0:07:23.570 ************ 2025-05-30 00:37:20.420573 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:37:20.482590 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:37:20.544899 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:37:20.600636 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:37:20.658400 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:37:21.012525 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:37:21.013689 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:37:21.014796 | orchestrator | 2025-05-30 00:37:21.015671 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-05-30 00:37:21.016340 | orchestrator | Friday 30 May 2025 00:37:21 +0000 (0:00:00.745) 0:07:24.315 ************ 2025-05-30 00:37:21.132859 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:37:21.188296 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:37:21.245447 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:37:21.309068 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:37:21.368598 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:37:21.464087 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:37:21.464188 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:37:21.465034 | orchestrator | 2025-05-30 00:37:21.466155 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-05-30 00:37:21.468106 | orchestrator | Friday 30 May 2025 00:37:21 +0000 (0:00:00.450) 0:07:24.766 ************ 2025-05-30 00:37:21.585076 | orchestrator | ok: [testbed-manager] 2025-05-30 00:37:21.658694 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:37:21.724239 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:37:21.782165 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:37:21.841166 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:37:21.933044 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:37:21.933758 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:37:21.935072 | orchestrator | 2025-05-30 00:37:21.936345 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-05-30 00:37:21.937014 | orchestrator | Friday 30 May 2025 00:37:21 +0000 (0:00:00.469) 0:07:25.235 ************ 2025-05-30 00:37:22.048582 | orchestrator | ok: [testbed-manager] 2025-05-30 00:37:22.111009 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:37:22.320910 | orchestrator | ok: [testbed-node-4] 2025-05-30 
00:37:22.380595 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:37:22.437467 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:37:22.548544 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:37:22.549639 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:37:22.550236 | orchestrator | 2025-05-30 00:37:22.551227 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-05-30 00:37:22.552213 | orchestrator | Friday 30 May 2025 00:37:22 +0000 (0:00:00.614) 0:07:25.850 ************ 2025-05-30 00:37:22.669613 | orchestrator | ok: [testbed-manager] 2025-05-30 00:37:22.734857 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:37:22.796463 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:37:22.853460 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:37:22.920030 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:37:23.018767 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:37:23.019658 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:37:23.020330 | orchestrator | 2025-05-30 00:37:23.021415 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-05-30 00:37:23.022365 | orchestrator | Friday 30 May 2025 00:37:23 +0000 (0:00:00.470) 0:07:26.320 ************ 2025-05-30 00:37:28.826204 | orchestrator | ok: [testbed-manager] 2025-05-30 00:37:28.826952 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:37:28.828162 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:37:28.828754 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:37:28.830096 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:37:28.830778 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:37:28.832125 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:37:28.832284 | orchestrator | 2025-05-30 00:37:28.833133 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-05-30 00:37:28.833592 | orchestrator | Friday 30 May 2025 00:37:28 +0000 (0:00:05.806) 0:07:32.127 ************ 2025-05-30 00:37:29.027162 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:37:29.098872 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:37:29.164069 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:37:29.228240 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:37:29.354594 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:37:29.355174 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:37:29.358082 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:37:29.358112 | orchestrator | 2025-05-30 00:37:29.359422 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-05-30 00:37:29.360331 | orchestrator | Friday 30 May 2025 00:37:29 +0000 (0:00:00.529) 0:07:32.656 ************ 2025-05-30 00:37:30.323611 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:37:30.325938 | orchestrator | 2025-05-30 00:37:30.327352 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-05-30 00:37:30.328324 | orchestrator | Friday 30 May 2025 00:37:30 +0000 (0:00:00.965) 0:07:33.622 ************ 2025-05-30 00:37:32.180721 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:37:32.181851 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:37:32.182684 | orchestrator | ok: 
[testbed-manager] 2025-05-30 00:37:32.184757 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:37:32.185580 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:37:32.186227 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:37:32.188242 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:37:32.188270 | orchestrator | 2025-05-30 00:37:32.189058 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-05-30 00:37:32.189935 | orchestrator | Friday 30 May 2025 00:37:32 +0000 (0:00:01.859) 0:07:35.481 ************ 2025-05-30 00:37:33.270220 | orchestrator | ok: [testbed-manager] 2025-05-30 00:37:33.271770 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:37:33.272008 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:37:33.272436 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:37:33.273211 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:37:33.273630 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:37:33.274189 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:37:33.274692 | orchestrator | 2025-05-30 00:37:33.276627 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-05-30 00:37:33.276721 | orchestrator | Friday 30 May 2025 00:37:33 +0000 (0:00:01.086) 0:07:36.568 ************ 2025-05-30 00:37:33.674558 | orchestrator | ok: [testbed-manager] 2025-05-30 00:37:34.110174 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:37:34.112407 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:37:34.113612 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:37:34.113895 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:37:34.114673 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:37:34.115367 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:37:34.115858 | orchestrator | 2025-05-30 00:37:34.116179 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-05-30 00:37:34.116678 | orchestrator | Friday 30 May 2025 00:37:34 +0000 (0:00:00.845) 0:07:37.413 ************ 2025-05-30 00:37:35.978696 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-30 00:37:35.979157 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-30 00:37:35.980577 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-30 00:37:35.982206 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-30 00:37:35.983221 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-30 00:37:35.984107 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-30 00:37:35.985368 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-30 00:37:35.986075 | orchestrator | 2025-05-30 00:37:35.987342 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-05-30 00:37:35.988234 | orchestrator | 
Friday 30 May 2025 00:37:35 +0000 (0:00:01.864) 0:07:39.278 ************ 2025-05-30 00:37:36.758913 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:37:36.759017 | orchestrator | 2025-05-30 00:37:36.760247 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-05-30 00:37:36.760289 | orchestrator | Friday 30 May 2025 00:37:36 +0000 (0:00:00.780) 0:07:40.059 ************ 2025-05-30 00:37:45.534776 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:37:45.534988 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:37:45.535869 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:37:45.536597 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:37:45.536924 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:37:45.537460 | orchestrator | changed: [testbed-manager] 2025-05-30 00:37:45.538090 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:37:45.539175 | orchestrator | 2025-05-30 00:37:45.539677 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-05-30 00:37:45.540306 | orchestrator | Friday 30 May 2025 00:37:45 +0000 (0:00:08.775) 0:07:48.834 ************ 2025-05-30 00:37:46.282960 | orchestrator | ok: [testbed-manager] 2025-05-30 00:37:47.585388 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:37:47.585816 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:37:47.590181 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:37:47.590844 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:37:47.591337 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:37:47.592157 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:37:47.593157 | orchestrator | 2025-05-30 00:37:47.594422 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-05-30 00:37:47.595202 | orchestrator | Friday 30 May 2025 00:37:47 +0000 (0:00:02.049) 0:07:50.883 ************ 2025-05-30 00:37:48.843363 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:37:48.843597 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:37:48.844191 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:37:48.845202 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:37:48.845386 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:37:48.846381 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:37:48.846584 | orchestrator | 2025-05-30 00:37:48.847286 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-05-30 00:37:48.848026 | orchestrator | Friday 30 May 2025 00:37:48 +0000 (0:00:01.260) 0:07:52.143 ************ 2025-05-30 00:37:50.300668 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:37:50.301799 | orchestrator | changed: [testbed-manager] 2025-05-30 00:37:50.301949 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:37:50.302864 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:37:50.303707 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:37:50.304706 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:37:50.305185 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:37:50.305755 | orchestrator | 2025-05-30 00:37:50.306293 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-05-30 00:37:50.307330 | orchestrator | 2025-05-30 
00:37:50.308183 | orchestrator | TASK [Include hardening role] ************************************************** 2025-05-30 00:37:50.308216 | orchestrator | Friday 30 May 2025 00:37:50 +0000 (0:00:01.456) 0:07:53.600 ************ 2025-05-30 00:37:50.419955 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:37:50.506382 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:37:50.567046 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:37:50.623026 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:37:50.694466 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:37:50.836959 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:37:50.837769 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:37:50.839079 | orchestrator | 2025-05-30 00:37:50.839837 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-05-30 00:37:50.840376 | orchestrator | 2025-05-30 00:37:50.841153 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-05-30 00:37:50.841865 | orchestrator | Friday 30 May 2025 00:37:50 +0000 (0:00:00.538) 0:07:54.139 ************ 2025-05-30 00:37:52.160001 | orchestrator | changed: [testbed-manager] 2025-05-30 00:37:52.160112 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:37:52.160187 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:37:52.160438 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:37:52.161159 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:37:52.161650 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:37:52.162664 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:37:52.162733 | orchestrator | 2025-05-30 00:37:52.164545 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-05-30 00:37:52.165316 | orchestrator | Friday 30 May 2025 00:37:52 +0000 (0:00:01.320) 0:07:55.459 ************ 2025-05-30 00:37:53.582114 | orchestrator | ok: [testbed-manager] 2025-05-30 00:37:53.582659 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:37:53.583396 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:37:53.586573 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:37:53.586644 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:37:53.586653 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:37:53.587459 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:37:53.587774 | orchestrator | 2025-05-30 00:37:53.589057 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-05-30 00:37:53.591240 | orchestrator | Friday 30 May 2025 00:37:53 +0000 (0:00:01.424) 0:07:56.884 ************ 2025-05-30 00:37:53.708708 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:37:53.766406 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:37:53.829035 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:37:54.042255 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:37:54.101960 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:37:54.492574 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:37:54.493366 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:37:54.493398 | orchestrator | 2025-05-30 00:37:54.494709 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-05-30 00:37:54.495337 | orchestrator | Friday 30 May 2025 00:37:54 +0000 (0:00:00.907) 0:07:57.791 ************ 2025-05-30 00:37:55.736647 | orchestrator | changed: 
[testbed-manager] 2025-05-30 00:37:55.737971 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:37:55.739369 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:37:55.741108 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:37:55.744557 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:37:55.744606 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:37:55.744618 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:37:55.744630 | orchestrator | 2025-05-30 00:37:55.744643 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-05-30 00:37:55.744656 | orchestrator | 2025-05-30 00:37:55.745187 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-05-30 00:37:55.745709 | orchestrator | Friday 30 May 2025 00:37:55 +0000 (0:00:01.247) 0:07:59.039 ************ 2025-05-30 00:37:56.635434 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:37:56.635658 | orchestrator | 2025-05-30 00:37:56.636361 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-30 00:37:56.639645 | orchestrator | Friday 30 May 2025 00:37:56 +0000 (0:00:00.895) 0:07:59.934 ************ 2025-05-30 00:37:57.081863 | orchestrator | ok: [testbed-manager] 2025-05-30 00:37:57.142198 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:37:57.694468 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:37:57.694856 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:37:57.695342 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:37:57.696274 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:37:57.696545 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:37:57.697214 | orchestrator | 2025-05-30 00:37:57.699119 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-05-30 00:37:57.699817 | orchestrator | Friday 30 May 2025 00:37:57 +0000 (0:00:01.060) 0:08:00.995 ************ 2025-05-30 00:37:58.854256 | orchestrator | changed: [testbed-manager] 2025-05-30 00:37:58.855662 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:37:58.858700 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:37:58.860191 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:37:58.860819 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:37:58.861609 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:37:58.862486 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:37:58.863278 | orchestrator | 2025-05-30 00:37:58.863968 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-05-30 00:37:58.864628 | orchestrator | Friday 30 May 2025 00:37:58 +0000 (0:00:01.159) 0:08:02.154 ************ 2025-05-30 00:37:59.841865 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:37:59.842253 | orchestrator | 2025-05-30 00:37:59.843228 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-30 00:37:59.843954 | orchestrator | Friday 30 May 2025 00:37:59 +0000 (0:00:00.988) 0:08:03.143 ************ 2025-05-30 00:38:00.700735 | orchestrator | ok: [testbed-manager] 2025-05-30 00:38:00.701377 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:38:00.702376 | orchestrator | ok: 
[testbed-node-4] 2025-05-30 00:38:00.703132 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:38:00.706465 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:38:00.706547 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:38:00.706651 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:38:00.707404 | orchestrator | 2025-05-30 00:38:00.708025 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-05-30 00:38:00.708445 | orchestrator | Friday 30 May 2025 00:38:00 +0000 (0:00:00.856) 0:08:04.000 ************ 2025-05-30 00:38:01.794717 | orchestrator | changed: [testbed-manager] 2025-05-30 00:38:01.795337 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:38:01.795649 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:38:01.796312 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:38:01.797773 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:38:01.798782 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:38:01.799243 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:38:01.802088 | orchestrator | 2025-05-30 00:38:01.803927 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:38:01.803995 | orchestrator | 2025-05-30 00:38:01 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-30 00:38:01.804017 | orchestrator | 2025-05-30 00:38:01 | INFO  | Please wait and do not abort execution. 2025-05-30 00:38:01.804632 | orchestrator | testbed-manager : ok=160  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-05-30 00:38:01.805738 | orchestrator | testbed-node-0 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-30 00:38:01.806538 | orchestrator | testbed-node-1 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-30 00:38:01.807684 | orchestrator | testbed-node-2 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-30 00:38:01.808114 | orchestrator | testbed-node-3 : ok=167  changed=62  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-05-30 00:38:01.810239 | orchestrator | testbed-node-4 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-30 00:38:01.810822 | orchestrator | testbed-node-5 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-30 00:38:01.811791 | orchestrator | 2025-05-30 00:38:01.812837 | orchestrator | Friday 30 May 2025 00:38:01 +0000 (0:00:01.095) 0:08:05.095 ************ 2025-05-30 00:38:01.813293 | orchestrator | =============================================================================== 2025-05-30 00:38:01.813697 | orchestrator | osism.commons.packages : Install required packages --------------------- 82.51s 2025-05-30 00:38:01.814478 | orchestrator | osism.commons.packages : Download required packages -------------------- 37.71s 2025-05-30 00:38:01.814867 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.86s 2025-05-30 00:38:01.815385 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.45s 2025-05-30 00:38:01.815953 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 12.44s 2025-05-30 00:38:01.816714 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.39s 2025-05-30 00:38:01.817270 | orchestrator | osism.commons.systohc : Install 
util-linux-extra package --------------- 12.22s 2025-05-30 00:38:01.817619 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.02s 2025-05-30 00:38:01.818178 | orchestrator | osism.services.docker : Install apt-transport-https package ------------ 11.87s 2025-05-30 00:38:01.818615 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.75s 2025-05-30 00:38:01.819192 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.78s 2025-05-30 00:38:01.819377 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.44s 2025-05-30 00:38:01.820830 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.24s 2025-05-30 00:38:01.821000 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.11s 2025-05-30 00:38:01.821316 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.76s 2025-05-30 00:38:01.821818 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.55s 2025-05-30 00:38:01.822287 | orchestrator | osism.services.docker : Ensure that some packages are not installed ----- 6.12s 2025-05-30 00:38:01.822734 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.96s 2025-05-30 00:38:01.823124 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.91s 2025-05-30 00:38:01.823616 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.85s 2025-05-30 00:38:02.728757 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-05-30 00:38:02.728875 | orchestrator | + osism apply network 2025-05-30 00:38:04.596004 | orchestrator | 2025-05-30 00:38:04 | INFO  | Task 4b882255-a602-4e44-a855-efe53868c03e (network) was prepared for execution. 2025-05-30 00:38:04.596109 | orchestrator | 2025-05-30 00:38:04 | INFO  | It takes a moment until task 4b882255-a602-4e44-a855-efe53868c03e (network) has been started and output is visible here. 
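[note] The `osism apply network` run below applies the osism.commons.network role on the Debian-family nodes: it installs the required packages, removes the ifupdown package, renders a netplan configuration (the /etc/netplan/01-osism.yaml kept later in this run), removes the unused cloud-init default /etc/netplan/50-cloud-init.yaml, and sets up networkd-dispatcher scripts. For orientation only, a netplan file of that kind might look like the following sketch; the interface names, addresses and MTU are illustrative assumptions, not values taken from this job:

    # Sketch only - interface names, addresses and MTU are assumed, not from this log.
    network:
      version: 2
      ethernets:
        ens3:
          dhcp4: true
        ens4:
          dhcp4: false
          mtu: 1500
          addresses:
            - 192.168.16.10/20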
2025-05-30 00:38:07.905572 | orchestrator | 2025-05-30 00:38:07.906162 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-05-30 00:38:07.910397 | orchestrator | 2025-05-30 00:38:07.910452 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-05-30 00:38:07.910472 | orchestrator | Friday 30 May 2025 00:38:07 +0000 (0:00:00.200) 0:00:00.200 ************ 2025-05-30 00:38:08.051566 | orchestrator | ok: [testbed-manager] 2025-05-30 00:38:08.130711 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:38:08.209043 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:38:08.283905 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:38:08.365613 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:38:08.599851 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:38:08.600300 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:38:08.601391 | orchestrator | 2025-05-30 00:38:08.602580 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-05-30 00:38:08.605705 | orchestrator | Friday 30 May 2025 00:38:08 +0000 (0:00:00.696) 0:00:00.896 ************ 2025-05-30 00:38:09.806272 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:38:09.807805 | orchestrator | 2025-05-30 00:38:09.814288 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-05-30 00:38:09.814337 | orchestrator | Friday 30 May 2025 00:38:09 +0000 (0:00:01.203) 0:00:02.100 ************ 2025-05-30 00:38:11.695764 | orchestrator | ok: [testbed-manager] 2025-05-30 00:38:11.696271 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:38:11.697375 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:38:11.698615 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:38:11.699205 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:38:11.701192 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:38:11.702092 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:38:11.703023 | orchestrator | 2025-05-30 00:38:11.703934 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-05-30 00:38:11.704455 | orchestrator | Friday 30 May 2025 00:38:11 +0000 (0:00:01.887) 0:00:03.988 ************ 2025-05-30 00:38:13.372437 | orchestrator | ok: [testbed-manager] 2025-05-30 00:38:13.372738 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:38:13.374192 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:38:13.376603 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:38:13.376667 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:38:13.377027 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:38:13.377748 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:38:13.378537 | orchestrator | 2025-05-30 00:38:13.379180 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-05-30 00:38:13.379838 | orchestrator | Friday 30 May 2025 00:38:13 +0000 (0:00:01.676) 0:00:05.664 ************ 2025-05-30 00:38:13.871577 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-05-30 00:38:14.455975 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-05-30 00:38:14.456079 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-05-30 00:38:14.456535 | orchestrator 
| ok: [testbed-node-2] => (item=/etc/netplan) 2025-05-30 00:38:14.456648 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-05-30 00:38:14.457311 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-05-30 00:38:14.460721 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-05-30 00:38:14.460770 | orchestrator | 2025-05-30 00:38:14.460784 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-05-30 00:38:14.460796 | orchestrator | Friday 30 May 2025 00:38:14 +0000 (0:00:01.085) 0:00:06.750 ************ 2025-05-30 00:38:16.070172 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-30 00:38:16.070725 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-30 00:38:16.071324 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-30 00:38:16.075285 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-30 00:38:16.075700 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-30 00:38:16.076244 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-30 00:38:16.076870 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-30 00:38:16.077426 | orchestrator | 2025-05-30 00:38:16.077754 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-05-30 00:38:16.079186 | orchestrator | Friday 30 May 2025 00:38:16 +0000 (0:00:01.616) 0:00:08.367 ************ 2025-05-30 00:38:17.699385 | orchestrator | changed: [testbed-manager] 2025-05-30 00:38:17.699850 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:38:17.701962 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:38:17.702153 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:38:17.702930 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:38:17.703333 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:38:17.704290 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:38:17.705136 | orchestrator | 2025-05-30 00:38:17.705661 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-05-30 00:38:17.706622 | orchestrator | Friday 30 May 2025 00:38:17 +0000 (0:00:01.625) 0:00:09.992 ************ 2025-05-30 00:38:18.274646 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-30 00:38:18.727967 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-30 00:38:18.728139 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-30 00:38:18.729572 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-30 00:38:18.729664 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-30 00:38:18.730411 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-30 00:38:18.731333 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-30 00:38:18.732267 | orchestrator | 2025-05-30 00:38:18.732905 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-05-30 00:38:18.733311 | orchestrator | Friday 30 May 2025 00:38:18 +0000 (0:00:01.031) 0:00:11.024 ************ 2025-05-30 00:38:19.169560 | orchestrator | ok: [testbed-manager] 2025-05-30 00:38:19.250735 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:38:19.846749 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:38:19.847307 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:38:19.851755 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:38:19.851835 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:38:19.851849 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:38:19.851861 | orchestrator | 2025-05-30 
00:38:19.852224 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-05-30 00:38:19.852878 | orchestrator | Friday 30 May 2025 00:38:19 +0000 (0:00:01.116) 0:00:12.140 ************ 2025-05-30 00:38:20.015687 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:38:20.093384 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:38:20.171397 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:38:20.244089 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:38:20.321651 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:38:20.634381 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:38:20.634485 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:38:20.634684 | orchestrator | 2025-05-30 00:38:20.634942 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-05-30 00:38:20.635448 | orchestrator | Friday 30 May 2025 00:38:20 +0000 (0:00:00.789) 0:00:12.929 ************ 2025-05-30 00:38:22.647088 | orchestrator | ok: [testbed-manager] 2025-05-30 00:38:22.647658 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:38:22.649152 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:38:22.649181 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:38:22.651416 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:38:22.651777 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:38:22.652449 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:38:22.653011 | orchestrator | 2025-05-30 00:38:22.653828 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-05-30 00:38:22.654623 | orchestrator | Friday 30 May 2025 00:38:22 +0000 (0:00:02.013) 0:00:14.943 ************ 2025-05-30 00:38:24.477340 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-05-30 00:38:24.478130 | orchestrator | changed: [testbed-node-0] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-30 00:38:24.478420 | orchestrator | changed: [testbed-node-1] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-30 00:38:24.479270 | orchestrator | changed: [testbed-node-2] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-30 00:38:24.479881 | orchestrator | changed: [testbed-node-3] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-30 00:38:24.483587 | orchestrator | changed: [testbed-node-4] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-30 00:38:24.483638 | orchestrator | changed: [testbed-node-5] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-30 00:38:24.483651 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-30 00:38:24.483662 | orchestrator | 2025-05-30 00:38:24.483675 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-05-30 00:38:24.483687 | orchestrator | Friday 30 May 2025 00:38:24 +0000 (0:00:01.822) 0:00:16.766 ************ 2025-05-30 00:38:26.838420 | orchestrator | ok: [testbed-manager] 2025-05-30 00:38:26.838507 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:38:26.838626 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:38:26.839047 | 
orchestrator | changed: [testbed-node-3] 2025-05-30 00:38:26.841866 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:38:26.843202 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:38:26.843649 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:38:26.844382 | orchestrator | 2025-05-30 00:38:26.845468 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-05-30 00:38:26.845595 | orchestrator | Friday 30 May 2025 00:38:26 +0000 (0:00:02.367) 0:00:19.133 ************ 2025-05-30 00:38:28.226985 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:38:28.227734 | orchestrator | 2025-05-30 00:38:28.229207 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-05-30 00:38:28.230121 | orchestrator | Friday 30 May 2025 00:38:28 +0000 (0:00:01.387) 0:00:20.521 ************ 2025-05-30 00:38:28.744148 | orchestrator | ok: [testbed-manager] 2025-05-30 00:38:29.176731 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:38:29.176834 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:38:29.178581 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:38:29.179028 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:38:29.180192 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:38:29.180700 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:38:29.181386 | orchestrator | 2025-05-30 00:38:29.183449 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-05-30 00:38:29.183483 | orchestrator | Friday 30 May 2025 00:38:29 +0000 (0:00:00.950) 0:00:21.472 ************ 2025-05-30 00:38:29.334425 | orchestrator | ok: [testbed-manager] 2025-05-30 00:38:29.413319 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:38:29.648284 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:38:29.730175 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:38:29.814643 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:38:29.942501 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:38:29.942676 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:38:29.942974 | orchestrator | 2025-05-30 00:38:29.943951 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-05-30 00:38:29.944177 | orchestrator | Friday 30 May 2025 00:38:29 +0000 (0:00:00.764) 0:00:22.236 ************ 2025-05-30 00:38:30.289843 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-30 00:38:30.290008 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-05-30 00:38:30.452831 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-30 00:38:30.452915 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-05-30 00:38:30.897041 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-30 00:38:30.898411 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-05-30 00:38:30.899077 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-30 00:38:30.902421 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-05-30 00:38:30.902487 | orchestrator | changed: [testbed-node-3] => 
(item=/etc/netplan/50-cloud-init.yaml) 2025-05-30 00:38:30.902501 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-05-30 00:38:30.902606 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-30 00:38:30.903108 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-05-30 00:38:30.903586 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-30 00:38:30.904083 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-05-30 00:38:30.904593 | orchestrator | 2025-05-30 00:38:30.905178 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-05-30 00:38:30.905601 | orchestrator | Friday 30 May 2025 00:38:30 +0000 (0:00:00.958) 0:00:23.195 ************ 2025-05-30 00:38:31.205310 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:38:31.284446 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:38:31.363620 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:38:31.441809 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:38:31.521936 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:38:32.690994 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:38:32.691136 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:38:32.694692 | orchestrator | 2025-05-30 00:38:32.694769 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-05-30 00:38:32.694814 | orchestrator | Friday 30 May 2025 00:38:32 +0000 (0:00:01.788) 0:00:24.984 ************ 2025-05-30 00:38:32.849298 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:38:32.931806 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:38:33.188196 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:38:33.271700 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:38:33.353729 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:38:33.395961 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:38:33.396081 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:38:33.396557 | orchestrator | 2025-05-30 00:38:33.397323 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:38:33.397759 | orchestrator | 2025-05-30 00:38:33 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-30 00:38:33.397888 | orchestrator | 2025-05-30 00:38:33 | INFO  | Please wait and do not abort execution. 
2025-05-30 00:38:33.398912 | orchestrator | testbed-manager : ok=16  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-30 00:38:33.399157 | orchestrator | testbed-node-0 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-30 00:38:33.399690 | orchestrator | testbed-node-1 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-30 00:38:33.400060 | orchestrator | testbed-node-2 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-30 00:38:33.400477 | orchestrator | testbed-node-3 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-30 00:38:33.401009 | orchestrator | testbed-node-4 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-30 00:38:33.401315 | orchestrator | testbed-node-5 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-30 00:38:33.402137 | orchestrator | 2025-05-30 00:38:33.402976 | orchestrator | Friday 30 May 2025 00:38:33 +0000 (0:00:00.708) 0:00:25.693 ************ 2025-05-30 00:38:33.403090 | orchestrator | =============================================================================== 2025-05-30 00:38:33.403206 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 2.37s 2025-05-30 00:38:33.403390 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.01s 2025-05-30 00:38:33.403838 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.89s 2025-05-30 00:38:33.404228 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 1.82s 2025-05-30 00:38:33.404674 | orchestrator | osism.commons.network : Include dummy interfaces ------------------------ 1.79s 2025-05-30 00:38:33.405234 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.68s 2025-05-30 00:38:33.405496 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.63s 2025-05-30 00:38:33.405916 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 1.62s 2025-05-30 00:38:33.406301 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.39s 2025-05-30 00:38:33.406741 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.20s 2025-05-30 00:38:33.407305 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.12s 2025-05-30 00:38:33.408328 | orchestrator | osism.commons.network : Create required directories --------------------- 1.09s 2025-05-30 00:38:33.408860 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.03s 2025-05-30 00:38:33.409359 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 0.96s 2025-05-30 00:38:33.409886 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.95s 2025-05-30 00:38:33.410910 | orchestrator | osism.commons.network : Copy interfaces file ---------------------------- 0.79s 2025-05-30 00:38:33.411068 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.76s 2025-05-30 00:38:33.411431 | orchestrator | osism.commons.network : Netplan configuration changed ------------------- 0.71s 2025-05-30 00:38:33.411819 | orchestrator | osism.commons.network : Gather variables for each operating 
system ------ 0.70s 2025-05-30 00:38:33.922810 | orchestrator | + osism apply wireguard 2025-05-30 00:38:35.299786 | orchestrator | 2025-05-30 00:38:35 | INFO  | Task c7a60a62-6809-4e47-a66d-aa4f46891998 (wireguard) was prepared for execution. 2025-05-30 00:38:35.299898 | orchestrator | 2025-05-30 00:38:35 | INFO  | It takes a moment until task c7a60a62-6809-4e47-a66d-aa4f46891998 (wireguard) has been started and output is visible here. 2025-05-30 00:38:38.386751 | orchestrator | 2025-05-30 00:38:38.387605 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-05-30 00:38:38.388164 | orchestrator | 2025-05-30 00:38:38.389213 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-05-30 00:38:38.389665 | orchestrator | Friday 30 May 2025 00:38:38 +0000 (0:00:00.162) 0:00:00.163 ************ 2025-05-30 00:38:39.855092 | orchestrator | ok: [testbed-manager] 2025-05-30 00:38:39.855285 | orchestrator | 2025-05-30 00:38:39.856074 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-05-30 00:38:39.857062 | orchestrator | Friday 30 May 2025 00:38:39 +0000 (0:00:01.469) 0:00:01.632 ************ 2025-05-30 00:38:46.190146 | orchestrator | changed: [testbed-manager] 2025-05-30 00:38:46.191177 | orchestrator | 2025-05-30 00:38:46.191191 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-05-30 00:38:46.191745 | orchestrator | Friday 30 May 2025 00:38:46 +0000 (0:00:06.335) 0:00:07.967 ************ 2025-05-30 00:38:46.732806 | orchestrator | changed: [testbed-manager] 2025-05-30 00:38:46.732977 | orchestrator | 2025-05-30 00:38:46.733361 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-05-30 00:38:46.735648 | orchestrator | Friday 30 May 2025 00:38:46 +0000 (0:00:00.544) 0:00:08.511 ************ 2025-05-30 00:38:47.149700 | orchestrator | changed: [testbed-manager] 2025-05-30 00:38:47.149877 | orchestrator | 2025-05-30 00:38:47.150695 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-05-30 00:38:47.151432 | orchestrator | Friday 30 May 2025 00:38:47 +0000 (0:00:00.417) 0:00:08.929 ************ 2025-05-30 00:38:47.661461 | orchestrator | ok: [testbed-manager] 2025-05-30 00:38:47.661811 | orchestrator | 2025-05-30 00:38:47.663328 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-05-30 00:38:47.663701 | orchestrator | Friday 30 May 2025 00:38:47 +0000 (0:00:00.508) 0:00:09.437 ************ 2025-05-30 00:38:48.175717 | orchestrator | ok: [testbed-manager] 2025-05-30 00:38:48.176007 | orchestrator | 2025-05-30 00:38:48.177117 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-05-30 00:38:48.177609 | orchestrator | Friday 30 May 2025 00:38:48 +0000 (0:00:00.517) 0:00:09.955 ************ 2025-05-30 00:38:48.569924 | orchestrator | ok: [testbed-manager] 2025-05-30 00:38:48.570332 | orchestrator | 2025-05-30 00:38:48.570737 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-05-30 00:38:48.570976 | orchestrator | Friday 30 May 2025 00:38:48 +0000 (0:00:00.393) 0:00:10.348 ************ 2025-05-30 00:38:49.728955 | orchestrator | changed: [testbed-manager] 2025-05-30 00:38:49.729123 | orchestrator | 2025-05-30 00:38:49.729227 | orchestrator | TASK 
[osism.services.wireguard : Copy client configuration files] ************** 2025-05-30 00:38:49.729688 | orchestrator | Friday 30 May 2025 00:38:49 +0000 (0:00:01.158) 0:00:11.507 ************ 2025-05-30 00:38:50.610413 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-30 00:38:50.610582 | orchestrator | changed: [testbed-manager] 2025-05-30 00:38:50.611845 | orchestrator | 2025-05-30 00:38:50.611874 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-05-30 00:38:50.612485 | orchestrator | Friday 30 May 2025 00:38:50 +0000 (0:00:00.880) 0:00:12.388 ************ 2025-05-30 00:38:52.277273 | orchestrator | changed: [testbed-manager] 2025-05-30 00:38:52.277458 | orchestrator | 2025-05-30 00:38:52.281739 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-05-30 00:38:52.282794 | orchestrator | Friday 30 May 2025 00:38:52 +0000 (0:00:01.667) 0:00:14.055 ************ 2025-05-30 00:38:53.173326 | orchestrator | changed: [testbed-manager] 2025-05-30 00:38:53.173511 | orchestrator | 2025-05-30 00:38:53.174264 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:38:53.175267 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:38:53.175400 | orchestrator | 2025-05-30 00:38:53 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-30 00:38:53.175418 | orchestrator | 2025-05-30 00:38:53 | INFO  | Please wait and do not abort execution. 2025-05-30 00:38:53.175967 | orchestrator | 2025-05-30 00:38:53.176512 | orchestrator | Friday 30 May 2025 00:38:53 +0000 (0:00:00.897) 0:00:14.953 ************ 2025-05-30 00:38:53.177070 | orchestrator | =============================================================================== 2025-05-30 00:38:53.177576 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.34s 2025-05-30 00:38:53.178126 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.67s 2025-05-30 00:38:53.178744 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.47s 2025-05-30 00:38:53.179093 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.16s 2025-05-30 00:38:53.179605 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.90s 2025-05-30 00:38:53.180680 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.88s 2025-05-30 00:38:53.181238 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.54s 2025-05-30 00:38:53.181708 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.52s 2025-05-30 00:38:53.182184 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.51s 2025-05-30 00:38:53.182649 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.42s 2025-05-30 00:38:53.183127 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.39s 2025-05-30 00:38:53.697965 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-05-30 00:38:53.729755 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-05-30 00:38:53.729841 | orchestrator | 
Dload Upload Total Spent Left Speed 2025-05-30 00:38:53.824515 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 158 0 --:--:-- --:--:-- --:--:-- 159 2025-05-30 00:38:53.841211 | orchestrator | + osism apply --environment custom workarounds 2025-05-30 00:38:55.230441 | orchestrator | 2025-05-30 00:38:55 | INFO  | Trying to run play workarounds in environment custom 2025-05-30 00:38:55.278475 | orchestrator | 2025-05-30 00:38:55 | INFO  | Task b3cfb790-fe95-465e-bcea-f6a04311002c (workarounds) was prepared for execution. 2025-05-30 00:38:55.278603 | orchestrator | 2025-05-30 00:38:55 | INFO  | It takes a moment until task b3cfb790-fe95-465e-bcea-f6a04311002c (workarounds) has been started and output is visible here. 2025-05-30 00:38:58.344751 | orchestrator | 2025-05-30 00:38:58.344871 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-30 00:38:58.344889 | orchestrator | 2025-05-30 00:38:58.346007 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-05-30 00:38:58.346754 | orchestrator | Friday 30 May 2025 00:38:58 +0000 (0:00:00.138) 0:00:00.138 ************ 2025-05-30 00:38:58.512983 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-05-30 00:38:58.592731 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-05-30 00:38:58.675277 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-05-30 00:38:58.758630 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-05-30 00:38:58.840857 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-05-30 00:38:59.076975 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-05-30 00:38:59.077084 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-05-30 00:38:59.078005 | orchestrator | 2025-05-30 00:38:59.078236 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-05-30 00:38:59.080002 | orchestrator | 2025-05-30 00:38:59.080231 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-05-30 00:38:59.080653 | orchestrator | Friday 30 May 2025 00:38:59 +0000 (0:00:00.736) 0:00:00.875 ************ 2025-05-30 00:39:01.610263 | orchestrator | ok: [testbed-manager] 2025-05-30 00:39:01.614365 | orchestrator | 2025-05-30 00:39:01.614599 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-05-30 00:39:01.616266 | orchestrator | 2025-05-30 00:39:01.617823 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-05-30 00:39:01.621380 | orchestrator | Friday 30 May 2025 00:39:01 +0000 (0:00:02.528) 0:00:03.403 ************ 2025-05-30 00:39:03.420678 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:39:03.420910 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:39:03.424882 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:39:03.425673 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:39:03.426210 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:39:03.426918 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:39:03.427348 | orchestrator | 2025-05-30 00:39:03.428081 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-05-30 00:39:03.429277 | orchestrator | 2025-05-30 
00:39:03.429389 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-05-30 00:39:03.430948 | orchestrator | Friday 30 May 2025 00:39:03 +0000 (0:00:01.811) 0:00:05.214 ************ 2025-05-30 00:39:04.851881 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-30 00:39:04.852497 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-30 00:39:04.853297 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-30 00:39:04.859194 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-30 00:39:04.859924 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-30 00:39:04.863889 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-30 00:39:04.866015 | orchestrator | 2025-05-30 00:39:04.867894 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-05-30 00:39:04.868448 | orchestrator | Friday 30 May 2025 00:39:04 +0000 (0:00:01.431) 0:00:06.646 ************ 2025-05-30 00:39:08.553929 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:39:08.554161 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:39:08.554357 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:39:08.554561 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:39:08.555017 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:39:08.555232 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:39:08.557241 | orchestrator | 2025-05-30 00:39:08.557580 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-05-30 00:39:08.557889 | orchestrator | Friday 30 May 2025 00:39:08 +0000 (0:00:03.706) 0:00:10.352 ************ 2025-05-30 00:39:08.713697 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:39:08.797835 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:39:08.874593 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:39:09.088239 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:39:09.220987 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:39:09.221785 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:39:09.225257 | orchestrator | 2025-05-30 00:39:09.225309 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-05-30 00:39:09.225322 | orchestrator | 2025-05-30 00:39:09.225334 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-05-30 00:39:09.225410 | orchestrator | Friday 30 May 2025 00:39:09 +0000 (0:00:00.663) 0:00:11.016 ************ 2025-05-30 00:39:10.917643 | orchestrator | changed: [testbed-manager] 2025-05-30 00:39:10.918163 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:39:10.918821 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:39:10.919438 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:39:10.920188 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:39:10.922115 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:39:10.922141 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:39:10.923039 | orchestrator | 2025-05-30 00:39:10.923452 | 
orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-05-30 00:39:10.924779 | orchestrator | Friday 30 May 2025 00:39:10 +0000 (0:00:01.698) 0:00:12.715 ************ 2025-05-30 00:39:12.527598 | orchestrator | changed: [testbed-manager] 2025-05-30 00:39:12.527747 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:39:12.532891 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:39:12.532930 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:39:12.533609 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:39:12.534433 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:39:12.535441 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:39:12.536175 | orchestrator | 2025-05-30 00:39:12.537039 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-05-30 00:39:12.537604 | orchestrator | Friday 30 May 2025 00:39:12 +0000 (0:00:01.605) 0:00:14.320 ************ 2025-05-30 00:39:13.993869 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:39:13.996051 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:39:14.000060 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:39:14.001244 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:39:14.002718 | orchestrator | ok: [testbed-manager] 2025-05-30 00:39:14.003380 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:39:14.004131 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:39:14.006438 | orchestrator | 2025-05-30 00:39:14.007259 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-05-30 00:39:14.008012 | orchestrator | Friday 30 May 2025 00:39:13 +0000 (0:00:01.471) 0:00:15.792 ************ 2025-05-30 00:39:15.700919 | orchestrator | changed: [testbed-manager] 2025-05-30 00:39:15.702198 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:39:15.704203 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:39:15.705780 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:39:15.706616 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:39:15.707296 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:39:15.708359 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:39:15.709518 | orchestrator | 2025-05-30 00:39:15.710582 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-05-30 00:39:15.711867 | orchestrator | Friday 30 May 2025 00:39:15 +0000 (0:00:01.705) 0:00:17.497 ************ 2025-05-30 00:39:15.852010 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:39:15.941637 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:39:16.013920 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:39:16.087037 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:39:16.321158 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:39:16.473070 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:39:16.473299 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:39:16.474242 | orchestrator | 2025-05-30 00:39:16.475039 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-05-30 00:39:16.475421 | orchestrator | 2025-05-30 00:39:16.475931 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-05-30 00:39:16.476383 | orchestrator | Friday 30 May 2025 00:39:16 +0000 (0:00:00.774) 0:00:18.271 ************ 2025-05-30 00:39:18.930816 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:39:18.931645 
| orchestrator | ok: [testbed-node-0] 2025-05-30 00:39:18.932157 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:39:18.933396 | orchestrator | ok: [testbed-manager] 2025-05-30 00:39:18.933850 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:39:18.935464 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:39:18.936583 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:39:18.937262 | orchestrator | 2025-05-30 00:39:18.937882 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:39:18.938200 | orchestrator | 2025-05-30 00:39:18 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-30 00:39:18.938429 | orchestrator | 2025-05-30 00:39:18 | INFO  | Please wait and do not abort execution. 2025-05-30 00:39:18.939205 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-30 00:39:18.939666 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:39:18.940269 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:39:18.940687 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:39:18.941196 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:39:18.941669 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:39:18.942121 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:39:18.942481 | orchestrator | 2025-05-30 00:39:18.942955 | orchestrator | Friday 30 May 2025 00:39:18 +0000 (0:00:02.456) 0:00:20.727 ************ 2025-05-30 00:39:18.943316 | orchestrator | =============================================================================== 2025-05-30 00:39:18.943690 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.71s 2025-05-30 00:39:18.943938 | orchestrator | Apply netplan configuration --------------------------------------------- 2.53s 2025-05-30 00:39:18.944436 | orchestrator | Install python3-docker -------------------------------------------------- 2.46s 2025-05-30 00:39:18.944979 | orchestrator | Apply netplan configuration --------------------------------------------- 1.81s 2025-05-30 00:39:18.945248 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.71s 2025-05-30 00:39:18.945561 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.70s 2025-05-30 00:39:18.945996 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.61s 2025-05-30 00:39:18.946343 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.47s 2025-05-30 00:39:18.946755 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.43s 2025-05-30 00:39:18.947085 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.77s 2025-05-30 00:39:18.947504 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.74s 2025-05-30 00:39:18.947867 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.66s 2025-05-30 00:39:19.473616 | orchestrator | + osism 
apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-05-30 00:39:20.877624 | orchestrator | 2025-05-30 00:39:20 | INFO  | Task 33eb0adc-976d-4fed-b13d-284d87ee50a6 (reboot) was prepared for execution. 2025-05-30 00:39:20.877731 | orchestrator | 2025-05-30 00:39:20 | INFO  | It takes a moment until task 33eb0adc-976d-4fed-b13d-284d87ee50a6 (reboot) has been started and output is visible here. 2025-05-30 00:39:23.907029 | orchestrator | 2025-05-30 00:39:23.907177 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-30 00:39:23.908180 | orchestrator | 2025-05-30 00:39:23.908944 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-30 00:39:23.908969 | orchestrator | Friday 30 May 2025 00:39:23 +0000 (0:00:00.143) 0:00:00.143 ************ 2025-05-30 00:39:23.998687 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:39:23.998817 | orchestrator | 2025-05-30 00:39:24.000907 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-30 00:39:24.002067 | orchestrator | Friday 30 May 2025 00:39:23 +0000 (0:00:00.093) 0:00:00.237 ************ 2025-05-30 00:39:24.877199 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:39:24.878483 | orchestrator | 2025-05-30 00:39:24.878706 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-30 00:39:24.879348 | orchestrator | Friday 30 May 2025 00:39:24 +0000 (0:00:00.876) 0:00:01.114 ************ 2025-05-30 00:39:24.990591 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:39:24.990752 | orchestrator | 2025-05-30 00:39:24.992595 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-30 00:39:24.993506 | orchestrator | 2025-05-30 00:39:24.995503 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-30 00:39:24.996732 | orchestrator | Friday 30 May 2025 00:39:24 +0000 (0:00:00.111) 0:00:01.226 ************ 2025-05-30 00:39:25.092578 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:39:25.092892 | orchestrator | 2025-05-30 00:39:25.093347 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-30 00:39:25.095516 | orchestrator | Friday 30 May 2025 00:39:25 +0000 (0:00:00.105) 0:00:01.331 ************ 2025-05-30 00:39:25.742300 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:39:25.742527 | orchestrator | 2025-05-30 00:39:25.743808 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-30 00:39:25.744857 | orchestrator | Friday 30 May 2025 00:39:25 +0000 (0:00:00.648) 0:00:01.980 ************ 2025-05-30 00:39:25.874318 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:39:25.875001 | orchestrator | 2025-05-30 00:39:25.875882 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-30 00:39:25.876458 | orchestrator | 2025-05-30 00:39:25.877178 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-30 00:39:25.877678 | orchestrator | Friday 30 May 2025 00:39:25 +0000 (0:00:00.131) 0:00:02.111 ************ 2025-05-30 00:39:25.977667 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:39:25.979027 | orchestrator | 2025-05-30 00:39:25.979475 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] 
****************** 2025-05-30 00:39:25.980562 | orchestrator | Friday 30 May 2025 00:39:25 +0000 (0:00:00.103) 0:00:02.214 ************ 2025-05-30 00:39:26.754954 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:39:26.757766 | orchestrator | 2025-05-30 00:39:26.758211 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-30 00:39:26.758789 | orchestrator | Friday 30 May 2025 00:39:26 +0000 (0:00:00.778) 0:00:02.993 ************ 2025-05-30 00:39:26.853569 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:39:26.854163 | orchestrator | 2025-05-30 00:39:26.855123 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-30 00:39:26.856739 | orchestrator | 2025-05-30 00:39:26.858806 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-30 00:39:26.860047 | orchestrator | Friday 30 May 2025 00:39:26 +0000 (0:00:00.096) 0:00:03.089 ************ 2025-05-30 00:39:26.947414 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:39:26.948646 | orchestrator | 2025-05-30 00:39:26.950331 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-30 00:39:26.950485 | orchestrator | Friday 30 May 2025 00:39:26 +0000 (0:00:00.096) 0:00:03.186 ************ 2025-05-30 00:39:27.632618 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:39:27.632724 | orchestrator | 2025-05-30 00:39:27.633746 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-30 00:39:27.634740 | orchestrator | Friday 30 May 2025 00:39:27 +0000 (0:00:00.684) 0:00:03.871 ************ 2025-05-30 00:39:27.745224 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:39:27.745332 | orchestrator | 2025-05-30 00:39:27.749268 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-30 00:39:27.749304 | orchestrator | 2025-05-30 00:39:27.750141 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-30 00:39:27.750214 | orchestrator | Friday 30 May 2025 00:39:27 +0000 (0:00:00.110) 0:00:03.981 ************ 2025-05-30 00:39:27.844124 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:39:27.844412 | orchestrator | 2025-05-30 00:39:27.844760 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-30 00:39:27.845173 | orchestrator | Friday 30 May 2025 00:39:27 +0000 (0:00:00.101) 0:00:04.083 ************ 2025-05-30 00:39:28.542712 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:39:28.544382 | orchestrator | 2025-05-30 00:39:28.544914 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-30 00:39:28.545698 | orchestrator | Friday 30 May 2025 00:39:28 +0000 (0:00:00.694) 0:00:04.778 ************ 2025-05-30 00:39:28.645891 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:39:28.646939 | orchestrator | 2025-05-30 00:39:28.647291 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-30 00:39:28.648200 | orchestrator | 2025-05-30 00:39:28.648293 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-30 00:39:28.649142 | orchestrator | Friday 30 May 2025 00:39:28 +0000 (0:00:00.103) 0:00:04.882 ************ 2025-05-30 00:39:28.759699 | orchestrator | skipping: 
[testbed-node-5] 2025-05-30 00:39:28.760727 | orchestrator | 2025-05-30 00:39:28.761481 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-30 00:39:28.764975 | orchestrator | Friday 30 May 2025 00:39:28 +0000 (0:00:00.116) 0:00:04.998 ************ 2025-05-30 00:39:29.445598 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:39:29.446582 | orchestrator | 2025-05-30 00:39:29.446657 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-30 00:39:29.447747 | orchestrator | Friday 30 May 2025 00:39:29 +0000 (0:00:00.684) 0:00:05.682 ************ 2025-05-30 00:39:29.472513 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:39:29.473089 | orchestrator | 2025-05-30 00:39:29.474255 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:39:29.474297 | orchestrator | 2025-05-30 00:39:29 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-30 00:39:29.474311 | orchestrator | 2025-05-30 00:39:29 | INFO  | Please wait and do not abort execution. 2025-05-30 00:39:29.475318 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:39:29.476161 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:39:29.477033 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:39:29.477662 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:39:29.478619 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:39:29.479268 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:39:29.479920 | orchestrator | 2025-05-30 00:39:29.480347 | orchestrator | Friday 30 May 2025 00:39:29 +0000 (0:00:00.030) 0:00:05.712 ************ 2025-05-30 00:39:29.481034 | orchestrator | =============================================================================== 2025-05-30 00:39:29.481432 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.37s 2025-05-30 00:39:29.482109 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.62s 2025-05-30 00:39:29.482754 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.58s 2025-05-30 00:39:29.966475 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-05-30 00:39:31.417741 | orchestrator | 2025-05-30 00:39:31 | INFO  | Task 0f50bc6d-564f-4edd-a8f4-f035a9b5f79f (wait-for-connection) was prepared for execution. 2025-05-30 00:39:31.417949 | orchestrator | 2025-05-30 00:39:31 | INFO  | It takes a moment until task 0f50bc6d-564f-4edd-a8f4-f035a9b5f79f (wait-for-connection) has been started and output is visible here. 
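The reboot play above only proceeds because ireallymeanit=yes is passed, so the guard task ("Exit playbook, if user did not mean to reboot systems") is skipped, each node is rebooted without waiting, and a separate wait-for-connection play then polls for the nodes to come back. A minimal bash sketch of that sequence, assuming the osism CLI and the testbed-nodes inventory group as used in this job:

  # Reboot without blocking, then wait for SSH to return on all nodes.
  osism apply reboot -l testbed-nodes -e ireallymeanit=yes
  osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes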
2025-05-30 00:39:34.476209 | orchestrator | 2025-05-30 00:39:34.477388 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-05-30 00:39:34.480853 | orchestrator | 2025-05-30 00:39:34.481408 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-05-30 00:39:34.482260 | orchestrator | Friday 30 May 2025 00:39:34 +0000 (0:00:00.167) 0:00:00.167 ************ 2025-05-30 00:39:48.503310 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:39:48.503469 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:39:48.503497 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:39:48.503699 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:39:48.504729 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:39:48.505688 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:39:48.506334 | orchestrator | 2025-05-30 00:39:48.506795 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:39:48.507294 | orchestrator | 2025-05-30 00:39:48 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-30 00:39:48.507459 | orchestrator | 2025-05-30 00:39:48 | INFO  | Please wait and do not abort execution. 2025-05-30 00:39:48.508195 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:39:48.509002 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:39:48.509394 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:39:48.509871 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:39:48.510373 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:39:48.510974 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:39:48.511420 | orchestrator | 2025-05-30 00:39:48.511810 | orchestrator | Friday 30 May 2025 00:39:48 +0000 (0:00:14.022) 0:00:14.190 ************ 2025-05-30 00:39:48.512819 | orchestrator | =============================================================================== 2025-05-30 00:39:48.512858 | orchestrator | Wait until remote system is reachable ---------------------------------- 14.02s 2025-05-30 00:39:49.087929 | orchestrator | + osism apply hddtemp 2025-05-30 00:39:50.537514 | orchestrator | 2025-05-30 00:39:50 | INFO  | Task ce229994-4518-405f-a3aa-27f600f6ddca (hddtemp) was prepared for execution. 2025-05-30 00:39:50.537707 | orchestrator | 2025-05-30 00:39:50 | INFO  | It takes a moment until task ce229994-4518-405f-a3aa-27f600f6ddca (hddtemp) has been started and output is visible here. 
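The wait-for-connection play blocks until every node answers over SSH again (about 14 seconds in this run). A rough bash equivalent of that per-node check, using a hypothetical helper name and timeout values not taken from the job:

  # Poll SSH until the host answers or the timeout expires.
  wait_for_ssh() {
      local host=$1 timeout=${2:-600} waited=0
      until ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; do
          sleep 10
          waited=$((waited + 10))
          [ "$waited" -ge "$timeout" ] && return 1
      done
  }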
2025-05-30 00:39:53.703705 | orchestrator | 2025-05-30 00:39:53.704140 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-05-30 00:39:53.705022 | orchestrator | 2025-05-30 00:39:53.705809 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-05-30 00:39:53.707025 | orchestrator | Friday 30 May 2025 00:39:53 +0000 (0:00:00.193) 0:00:00.193 ************ 2025-05-30 00:39:53.841284 | orchestrator | ok: [testbed-manager] 2025-05-30 00:39:53.914246 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:39:53.989857 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:39:54.077367 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:39:54.152405 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:39:54.378503 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:39:54.380025 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:39:54.383133 | orchestrator | 2025-05-30 00:39:54.383175 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-05-30 00:39:54.383190 | orchestrator | Friday 30 May 2025 00:39:54 +0000 (0:00:00.678) 0:00:00.872 ************ 2025-05-30 00:39:55.537071 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:39:55.537239 | orchestrator | 2025-05-30 00:39:55.541203 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-05-30 00:39:55.541228 | orchestrator | Friday 30 May 2025 00:39:55 +0000 (0:00:01.157) 0:00:02.029 ************ 2025-05-30 00:39:57.481850 | orchestrator | ok: [testbed-manager] 2025-05-30 00:39:57.481962 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:39:57.483176 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:39:57.489891 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:39:57.489934 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:39:57.489975 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:39:57.489987 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:39:57.489998 | orchestrator | 2025-05-30 00:39:57.490010 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-05-30 00:39:57.490072 | orchestrator | Friday 30 May 2025 00:39:57 +0000 (0:00:01.947) 0:00:03.976 ************ 2025-05-30 00:39:58.086181 | orchestrator | changed: [testbed-manager] 2025-05-30 00:39:58.173021 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:39:58.613997 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:39:58.614162 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:39:58.615128 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:39:58.619083 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:39:58.619438 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:39:58.620236 | orchestrator | 2025-05-30 00:39:58.620689 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-05-30 00:39:58.621313 | orchestrator | Friday 30 May 2025 00:39:58 +0000 (0:00:01.128) 0:00:05.105 ************ 2025-05-30 00:40:00.802829 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:40:00.805580 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:40:00.805616 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:40:00.806352 | orchestrator | ok: [testbed-node-3] 2025-05-30 
00:40:00.806850 | orchestrator | ok: [testbed-manager] 2025-05-30 00:40:00.807307 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:40:00.807917 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:40:00.808349 | orchestrator | 2025-05-30 00:40:00.808776 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-05-30 00:40:00.809212 | orchestrator | Friday 30 May 2025 00:40:00 +0000 (0:00:02.177) 0:00:07.283 ************ 2025-05-30 00:40:01.052330 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:40:01.141101 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:40:01.225190 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:40:01.306888 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:40:01.445655 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:40:01.446643 | orchestrator | changed: [testbed-manager] 2025-05-30 00:40:01.447003 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:40:01.448235 | orchestrator | 2025-05-30 00:40:01.449154 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-05-30 00:40:01.449964 | orchestrator | Friday 30 May 2025 00:40:01 +0000 (0:00:00.656) 0:00:07.939 ************ 2025-05-30 00:40:14.395804 | orchestrator | changed: [testbed-manager] 2025-05-30 00:40:14.395947 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:40:14.395971 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:40:14.395990 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:40:14.396006 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:40:14.396020 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:40:14.396112 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:40:14.396593 | orchestrator | 2025-05-30 00:40:14.397363 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-05-30 00:40:14.398356 | orchestrator | Friday 30 May 2025 00:40:14 +0000 (0:00:12.939) 0:00:20.879 ************ 2025-05-30 00:40:15.573140 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:40:15.573232 | orchestrator | 2025-05-30 00:40:15.574474 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-05-30 00:40:15.575021 | orchestrator | Friday 30 May 2025 00:40:15 +0000 (0:00:01.184) 0:00:22.064 ************ 2025-05-30 00:40:17.478139 | orchestrator | changed: [testbed-manager] 2025-05-30 00:40:17.479104 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:40:17.480474 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:40:17.481459 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:40:17.482477 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:40:17.484837 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:40:17.486108 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:40:17.487219 | orchestrator | 2025-05-30 00:40:17.488981 | orchestrator | 2025-05-30 00:40:17 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-30 00:40:17.489027 | orchestrator | 2025-05-30 00:40:17 | INFO  | Please wait and do not abort execution. 
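On these Debian-family hosts the osism.services.hddtemp role removes the legacy hddtemp package, enables and (where needed) loads the drivetemp kernel module, and installs lm-sensors. A sketch of the equivalent manual steps; the modules-load.d path is an assumption, since the play output does not print it:

  echo drivetemp | sudo tee /etc/modules-load.d/drivetemp.conf   # enable the module at boot
  sudo modprobe drivetemp                                        # load it now (only changed on testbed-manager in this run)
  sudo apt-get install -y lm-sensors                             # sensors tooling, managed as a service afterwards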
2025-05-30 00:40:17.489397 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:40:17.490304 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:40:17.491128 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-30 00:40:17.492006 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-30 00:40:17.492401 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-30 00:40:17.493230 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-30 00:40:17.493706 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-30 00:40:17.494274 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-30 00:40:17.495072 | orchestrator | 2025-05-30 00:40:17.495646 | orchestrator | Friday 30 May 2025 00:40:17 +0000 (0:00:01.910) 0:00:23.974 ************ 2025-05-30 00:40:17.496320 | orchestrator | =============================================================================== 2025-05-30 00:40:17.496852 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.94s 2025-05-30 00:40:17.497383 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 2.18s 2025-05-30 00:40:17.498119 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.95s 2025-05-30 00:40:17.498775 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.91s 2025-05-30 00:40:17.499333 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.18s 2025-05-30 00:40:17.499728 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.16s 2025-05-30 00:40:17.500171 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.13s 2025-05-30 00:40:17.500618 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.68s 2025-05-30 00:40:17.501149 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.66s 2025-05-30 00:40:18.034380 | orchestrator | + sudo systemctl restart docker-compose@manager 2025-05-30 00:40:19.507949 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-05-30 00:40:19.508080 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-05-30 00:40:19.508098 | orchestrator | + local max_attempts=60 2025-05-30 00:40:19.508111 | orchestrator | + local name=ceph-ansible 2025-05-30 00:40:19.508122 | orchestrator | + local attempt_num=1 2025-05-30 00:40:19.508904 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-05-30 00:40:19.539077 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-30 00:40:19.539161 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-05-30 00:40:19.539174 | orchestrator | + local max_attempts=60 2025-05-30 00:40:19.539185 | orchestrator | + local name=kolla-ansible 2025-05-30 00:40:19.539196 | orchestrator | + local attempt_num=1 2025-05-30 00:40:19.539827 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-05-30 
00:40:19.567270 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-30 00:40:19.567375 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-05-30 00:40:19.567429 | orchestrator | + local max_attempts=60 2025-05-30 00:40:19.567447 | orchestrator | + local name=osism-ansible 2025-05-30 00:40:19.567464 | orchestrator | + local attempt_num=1 2025-05-30 00:40:19.567699 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-05-30 00:40:19.597251 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-30 00:40:19.597356 | orchestrator | + [[ true == \t\r\u\e ]] 2025-05-30 00:40:19.597370 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-05-30 00:40:19.759552 | orchestrator | ARA in ceph-ansible already disabled. 2025-05-30 00:40:19.930908 | orchestrator | ARA in kolla-ansible already disabled. 2025-05-30 00:40:20.090308 | orchestrator | ARA in osism-ansible already disabled. 2025-05-30 00:40:20.256420 | orchestrator | ARA in osism-kubernetes already disabled. 2025-05-30 00:40:20.257605 | orchestrator | + osism apply gather-facts 2025-05-30 00:40:21.659501 | orchestrator | 2025-05-30 00:40:21 | INFO  | Task 492ae2f9-f1dc-4e39-9a70-66e128c13bec (gather-facts) was prepared for execution. 2025-05-30 00:40:21.659654 | orchestrator | 2025-05-30 00:40:21 | INFO  | It takes a moment until task 492ae2f9-f1dc-4e39-9a70-66e128c13bec (gather-facts) has been started and output is visible here. 2025-05-30 00:40:24.829845 | orchestrator | 2025-05-30 00:40:24.830902 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-30 00:40:24.833789 | orchestrator | 2025-05-30 00:40:24.833821 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-30 00:40:24.834101 | orchestrator | Friday 30 May 2025 00:40:24 +0000 (0:00:00.174) 0:00:00.174 ************ 2025-05-30 00:40:29.809998 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:40:29.810195 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:40:29.811408 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:40:29.812469 | orchestrator | ok: [testbed-manager] 2025-05-30 00:40:29.813083 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:40:29.813891 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:40:29.814438 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:40:29.818080 | orchestrator | 2025-05-30 00:40:29.818135 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-30 00:40:29.818148 | orchestrator | 2025-05-30 00:40:29.818160 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-30 00:40:29.818172 | orchestrator | Friday 30 May 2025 00:40:29 +0000 (0:00:04.982) 0:00:05.157 ************ 2025-05-30 00:40:29.978179 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:40:30.045932 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:40:30.119821 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:40:30.195440 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:40:30.273493 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:40:30.305811 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:40:30.305971 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:40:30.306172 | orchestrator | 2025-05-30 00:40:30.306864 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:40:30.307179 | 
orchestrator | 2025-05-30 00:40:30 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-30 00:40:30.307200 | orchestrator | 2025-05-30 00:40:30 | INFO  | Please wait and do not abort execution. 2025-05-30 00:40:30.307792 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-30 00:40:30.308104 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-30 00:40:30.308723 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-30 00:40:30.309184 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-30 00:40:30.309552 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-30 00:40:30.309947 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-30 00:40:30.310297 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-30 00:40:30.310735 | orchestrator | 2025-05-30 00:40:30.310942 | orchestrator | Friday 30 May 2025 00:40:30 +0000 (0:00:00.497) 0:00:05.654 ************ 2025-05-30 00:40:30.311179 | orchestrator | =============================================================================== 2025-05-30 00:40:30.311460 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.98s 2025-05-30 00:40:30.311798 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s 2025-05-30 00:40:30.852638 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-05-30 00:40:30.868923 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-05-30 00:40:30.882247 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-05-30 00:40:30.892660 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-05-30 00:40:30.906831 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-05-30 00:40:30.918233 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-05-30 00:40:30.929476 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-05-30 00:40:30.948475 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-05-30 00:40:30.963664 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-05-30 00:40:30.982339 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-05-30 00:40:30.997406 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-05-30 00:40:31.010168 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-05-30 00:40:31.027357 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-05-30 00:40:31.040921 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-05-30 00:40:31.055465 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-05-30 00:40:31.067283 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-05-30 00:40:31.080062 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-05-30 00:40:31.095279 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-05-30 00:40:31.112660 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-05-30 00:40:31.126930 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-05-30 00:40:31.138773 | orchestrator | + [[ false == \t\r\u\e ]] 2025-05-30 00:40:31.583879 | orchestrator | ok: Runtime: 0:25:45.473031 2025-05-30 00:40:31.680744 | 2025-05-30 00:40:31.680879 | TASK [Deploy services] 2025-05-30 00:40:32.214921 | orchestrator | skipping: Conditional result was False 2025-05-30 00:40:32.224192 | 2025-05-30 00:40:32.224318 | TASK [Deploy in a nutshell] 2025-05-30 00:40:32.923764 | orchestrator | + set -e 2025-05-30 00:40:32.924087 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-30 00:40:32.924137 | orchestrator | ++ export INTERACTIVE=false 2025-05-30 00:40:32.924160 | orchestrator | ++ INTERACTIVE=false 2025-05-30 00:40:32.924175 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-30 00:40:32.924188 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-30 00:40:32.924219 | orchestrator | + source /opt/manager-vars.sh 2025-05-30 00:40:32.924266 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-30 00:40:32.924295 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-30 00:40:32.924309 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-30 00:40:32.924325 | orchestrator | ++ CEPH_VERSION=reef 2025-05-30 00:40:32.924338 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-30 00:40:32.924356 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-30 00:40:32.924367 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-05-30 00:40:32.924388 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-05-30 00:40:32.924399 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-30 00:40:32.924414 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-30 00:40:32.924425 | orchestrator | ++ export ARA=false 2025-05-30 00:40:32.924437 | orchestrator | ++ ARA=false 2025-05-30 00:40:32.924448 | orchestrator | ++ export TEMPEST=false 2025-05-30 00:40:32.924460 | orchestrator | ++ TEMPEST=false 2025-05-30 00:40:32.924472 | orchestrator | ++ export IS_ZUUL=true 2025-05-30 00:40:32.924482 | orchestrator | ++ IS_ZUUL=true 2025-05-30 00:40:32.924494 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.162 2025-05-30 00:40:32.924505 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.162 2025-05-30 00:40:32.924516 | orchestrator | ++ export EXTERNAL_API=false 2025-05-30 00:40:32.924527 | orchestrator | ++ EXTERNAL_API=false 2025-05-30 00:40:32.924538 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-30 00:40:32.924549 
| orchestrator | ++ IMAGE_USER=ubuntu 2025-05-30 00:40:32.924588 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-30 00:40:32.924609 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-30 00:40:32.924628 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-30 00:40:32.924647 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-30 00:40:32.924665 | orchestrator | 2025-05-30 00:40:32.924683 | orchestrator | # PULL IMAGES 2025-05-30 00:40:32.924695 | orchestrator | 2025-05-30 00:40:32.924706 | orchestrator | + echo 2025-05-30 00:40:32.924717 | orchestrator | + echo '# PULL IMAGES' 2025-05-30 00:40:32.924728 | orchestrator | + echo 2025-05-30 00:40:32.925646 | orchestrator | ++ semver 8.1.0 7.0.0 2025-05-30 00:40:32.983698 | orchestrator | + [[ 1 -ge 0 ]] 2025-05-30 00:40:32.983795 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-05-30 00:40:34.336853 | orchestrator | 2025-05-30 00:40:34 | INFO  | Trying to run play pull-images in environment custom 2025-05-30 00:40:34.383499 | orchestrator | 2025-05-30 00:40:34 | INFO  | Task d326b310-34d4-4912-8d47-709ef4f1fe81 (pull-images) was prepared for execution. 2025-05-30 00:40:34.383643 | orchestrator | 2025-05-30 00:40:34 | INFO  | It takes a moment until task d326b310-34d4-4912-8d47-709ef4f1fe81 (pull-images) has been started and output is visible here. 2025-05-30 00:40:37.398586 | orchestrator | 2025-05-30 00:40:37.399881 | orchestrator | PLAY [Pull images] ************************************************************* 2025-05-30 00:40:37.400676 | orchestrator | 2025-05-30 00:40:37.401705 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-05-30 00:40:37.401897 | orchestrator | Friday 30 May 2025 00:40:37 +0000 (0:00:00.139) 0:00:00.139 ************ 2025-05-30 00:41:16.679169 | orchestrator | changed: [testbed-manager] 2025-05-30 00:41:16.679340 | orchestrator | 2025-05-30 00:41:16.679358 | orchestrator | TASK [Pull other images] ******************************************************* 2025-05-30 00:41:16.679792 | orchestrator | Friday 30 May 2025 00:41:16 +0000 (0:00:39.281) 0:00:39.420 ************ 2025-05-30 00:42:01.523673 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-05-30 00:42:01.523817 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-05-30 00:42:01.523835 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-05-30 00:42:01.523847 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-05-30 00:42:01.523858 | orchestrator | changed: [testbed-manager] => (item=common) 2025-05-30 00:42:01.523870 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-05-30 00:42:01.523882 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-05-30 00:42:01.523908 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-05-30 00:42:01.524095 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-05-30 00:42:01.524475 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-05-30 00:42:01.529027 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-05-30 00:42:01.529066 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-05-30 00:42:01.529078 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-05-30 00:42:01.529089 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-05-30 00:42:01.529100 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-05-30 00:42:01.529111 | orchestrator | changed: 
[testbed-manager] => (item=nova) 2025-05-30 00:42:01.529121 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-05-30 00:42:01.529132 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-05-30 00:42:01.529143 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-05-30 00:42:01.529154 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-05-30 00:42:01.529164 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-05-30 00:42:01.529175 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-05-30 00:42:01.529186 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-05-30 00:42:01.529196 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-05-30 00:42:01.529207 | orchestrator | 2025-05-30 00:42:01.529219 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:42:01.530360 | orchestrator | 2025-05-30 00:42:01 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-30 00:42:01.530401 | orchestrator | 2025-05-30 00:42:01 | INFO  | Please wait and do not abort execution. 2025-05-30 00:42:01.530849 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:42:01.530872 | orchestrator | 2025-05-30 00:42:01.531248 | orchestrator | Friday 30 May 2025 00:42:01 +0000 (0:00:44.845) 0:01:24.266 ************ 2025-05-30 00:42:01.531545 | orchestrator | =============================================================================== 2025-05-30 00:42:01.531769 | orchestrator | Pull other images ------------------------------------------------------ 44.85s 2025-05-30 00:42:01.531957 | orchestrator | Pull keystone image ---------------------------------------------------- 39.28s 2025-05-30 00:42:03.647700 | orchestrator | 2025-05-30 00:42:03 | INFO  | Trying to run play wipe-partitions in environment custom 2025-05-30 00:42:03.695245 | orchestrator | 2025-05-30 00:42:03 | INFO  | Task cedb111d-e817-48b5-bcf4-0036629d5686 (wipe-partitions) was prepared for execution. 2025-05-30 00:42:03.695337 | orchestrator | 2025-05-30 00:42:03 | INFO  | It takes a moment until task cedb111d-e817-48b5-bcf4-0036629d5686 (wipe-partitions) has been started and output is visible here. 
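The pre-pull step is driven with "osism apply -r 2 -e custom pull-images", i.e. in the custom environment and, assuming -r sets a retry count (consistent with OSISM_APPLY_RETRY in the variables above), with up to two attempts. A hypothetical wrapper that only illustrates that retry semantics:

  for attempt in 1 2; do
      osism apply -e custom pull-images && break
      echo "pull-images failed (attempt $attempt), retrying" >&2
  done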
2025-05-30 00:42:06.749098 | orchestrator | 2025-05-30 00:42:06.749205 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-05-30 00:42:06.749232 | orchestrator | 2025-05-30 00:42:06.749253 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-05-30 00:42:06.749273 | orchestrator | Friday 30 May 2025 00:42:06 +0000 (0:00:00.122) 0:00:00.122 ************ 2025-05-30 00:42:07.330531 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:42:07.333129 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:42:07.333449 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:42:07.333537 | orchestrator | 2025-05-30 00:42:07.333726 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-05-30 00:42:07.333929 | orchestrator | Friday 30 May 2025 00:42:07 +0000 (0:00:00.590) 0:00:00.712 ************ 2025-05-30 00:42:07.498770 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:07.587265 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:42:07.587347 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:42:07.587361 | orchestrator | 2025-05-30 00:42:07.587450 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-05-30 00:42:07.587587 | orchestrator | Friday 30 May 2025 00:42:07 +0000 (0:00:00.257) 0:00:00.970 ************ 2025-05-30 00:42:08.297282 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:42:08.302005 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:42:08.302204 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:42:08.302298 | orchestrator | 2025-05-30 00:42:08.302693 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-05-30 00:42:08.302900 | orchestrator | Friday 30 May 2025 00:42:08 +0000 (0:00:00.705) 0:00:01.675 ************ 2025-05-30 00:42:08.460844 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:08.562279 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:42:08.562384 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:42:08.562400 | orchestrator | 2025-05-30 00:42:08.562415 | orchestrator | TASK [Check device availability] *********************************************** 2025-05-30 00:42:08.562428 | orchestrator | Friday 30 May 2025 00:42:08 +0000 (0:00:00.268) 0:00:01.944 ************ 2025-05-30 00:42:09.764785 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-05-30 00:42:09.764889 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-05-30 00:42:09.765132 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-05-30 00:42:09.765437 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-05-30 00:42:09.765891 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-05-30 00:42:09.766316 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-05-30 00:42:09.766915 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-05-30 00:42:09.771114 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-05-30 00:42:09.771306 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-05-30 00:42:09.771754 | orchestrator | 2025-05-30 00:42:09.772115 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-05-30 00:42:09.772516 | orchestrator | Friday 30 May 2025 00:42:09 +0000 (0:00:01.202) 0:00:03.147 ************ 2025-05-30 00:42:11.165193 | 
orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-05-30 00:42:11.166825 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-05-30 00:42:11.170698 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-05-30 00:42:11.171305 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-05-30 00:42:11.172424 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-05-30 00:42:11.173770 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-05-30 00:42:11.174527 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-05-30 00:42:11.176080 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-05-30 00:42:11.176496 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-05-30 00:42:11.177890 | orchestrator | 2025-05-30 00:42:11.178469 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-05-30 00:42:11.179332 | orchestrator | Friday 30 May 2025 00:42:11 +0000 (0:00:01.399) 0:00:04.546 ************ 2025-05-30 00:42:13.451325 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-05-30 00:42:13.451502 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-05-30 00:42:13.451884 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-05-30 00:42:13.454130 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-05-30 00:42:13.454158 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-05-30 00:42:13.454461 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-05-30 00:42:13.454739 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-05-30 00:42:13.455888 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-05-30 00:42:13.455909 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-05-30 00:42:13.455922 | orchestrator | 2025-05-30 00:42:13.456215 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-05-30 00:42:13.456372 | orchestrator | Friday 30 May 2025 00:42:13 +0000 (0:00:02.277) 0:00:06.824 ************ 2025-05-30 00:42:14.037746 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:42:14.037850 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:42:14.038141 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:42:14.038906 | orchestrator | 2025-05-30 00:42:14.039377 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-05-30 00:42:14.040046 | orchestrator | Friday 30 May 2025 00:42:14 +0000 (0:00:00.593) 0:00:07.418 ************ 2025-05-30 00:42:14.680014 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:42:14.681380 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:42:14.681434 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:42:14.682592 | orchestrator | 2025-05-30 00:42:14.683497 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:42:14.684126 | orchestrator | 2025-05-30 00:42:14 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-30 00:42:14.684294 | orchestrator | 2025-05-30 00:42:14 | INFO  | Please wait and do not abort execution. 
2025-05-30 00:42:14.686164 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:42:14.686199 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:42:14.687072 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:42:14.687890 | orchestrator | 2025-05-30 00:42:14.688566 | orchestrator | Friday 30 May 2025 00:42:14 +0000 (0:00:00.641) 0:00:08.059 ************ 2025-05-30 00:42:14.689058 | orchestrator | =============================================================================== 2025-05-30 00:42:14.695076 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.28s 2025-05-30 00:42:14.695154 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.40s 2025-05-30 00:42:14.695178 | orchestrator | Check device availability ----------------------------------------------- 1.20s 2025-05-30 00:42:14.695201 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.71s 2025-05-30 00:42:14.695221 | orchestrator | Request device events from the kernel ----------------------------------- 0.64s 2025-05-30 00:42:14.695239 | orchestrator | Reload udev rules ------------------------------------------------------- 0.59s 2025-05-30 00:42:14.695282 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.59s 2025-05-30 00:42:14.695302 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.27s 2025-05-30 00:42:14.695338 | orchestrator | Remove all rook related logical devices --------------------------------- 0.26s 2025-05-30 00:42:16.780676 | orchestrator | 2025-05-30 00:42:16 | INFO  | Task f7289919-2e96-47b2-9882-5484b113d9ee (facts) was prepared for execution. 2025-05-30 00:42:16.780768 | orchestrator | 2025-05-30 00:42:16 | INFO  | It takes a moment until task f7289919-2e96-47b2-9882-5484b113d9ee (facts) has been started and output is visible here. 
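The wipe-partitions play prepares /dev/sdb, /dev/sdc and /dev/sdd on the storage nodes (testbed-node-3 to testbed-node-5) for Ceph by clearing old signatures, zeroing the start of each disk, and re-triggering udev. A per-device sketch with standard util-linux/coreutils commands; the play's exact module calls are not printed, so treat this as an approximation:

  for dev in /dev/sdb /dev/sdc /dev/sdd; do
      sudo wipefs --all "$dev"                       # drop filesystem/LVM signatures
      sudo dd if=/dev/zero of="$dev" bs=1M count=32  # overwrite first 32M with zeros
  done
  sudo udevadm control --reload-rules                # reload udev rules
  sudo udevadm trigger                               # request device events from the kernel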
2025-05-30 00:42:19.940057 | orchestrator | 2025-05-30 00:42:19.942203 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-05-30 00:42:19.942240 | orchestrator | 2025-05-30 00:42:19.942253 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-30 00:42:19.942265 | orchestrator | Friday 30 May 2025 00:42:19 +0000 (0:00:00.172) 0:00:00.172 ************ 2025-05-30 00:42:20.862720 | orchestrator | ok: [testbed-manager] 2025-05-30 00:42:20.864036 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:42:20.865268 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:42:20.868366 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:42:20.868402 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:42:20.868910 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:42:20.869591 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:42:20.870134 | orchestrator | 2025-05-30 00:42:20.870603 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-30 00:42:20.871196 | orchestrator | Friday 30 May 2025 00:42:20 +0000 (0:00:00.921) 0:00:01.094 ************ 2025-05-30 00:42:21.003389 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:42:21.073346 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:42:21.152834 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:42:21.222724 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:42:21.289048 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:21.905569 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:42:21.906845 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:42:21.909180 | orchestrator | 2025-05-30 00:42:21.910009 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-30 00:42:21.913115 | orchestrator | 2025-05-30 00:42:21.913157 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-30 00:42:21.913847 | orchestrator | Friday 30 May 2025 00:42:21 +0000 (0:00:01.044) 0:00:02.138 ************ 2025-05-30 00:42:26.454324 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:42:26.459937 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:42:26.460002 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:42:26.462348 | orchestrator | ok: [testbed-manager] 2025-05-30 00:42:26.467604 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:42:26.468284 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:42:26.469245 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:42:26.469950 | orchestrator | 2025-05-30 00:42:26.471024 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-30 00:42:26.471376 | orchestrator | 2025-05-30 00:42:26.472268 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-30 00:42:26.472933 | orchestrator | Friday 30 May 2025 00:42:26 +0000 (0:00:04.547) 0:00:06.686 ************ 2025-05-30 00:42:26.787449 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:42:26.859981 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:42:26.946106 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:42:27.028503 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:42:27.114322 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:27.149561 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:42:27.150493 | orchestrator | skipping: 
[testbed-node-5] 2025-05-30 00:42:27.151276 | orchestrator | 2025-05-30 00:42:27.152994 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:42:27.153060 | orchestrator | 2025-05-30 00:42:27 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-30 00:42:27.153129 | orchestrator | 2025-05-30 00:42:27 | INFO  | Please wait and do not abort execution. 2025-05-30 00:42:27.154434 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:42:27.154776 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:42:27.155755 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:42:27.156426 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:42:27.156993 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:42:27.158679 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:42:27.159400 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:42:27.159537 | orchestrator | 2025-05-30 00:42:27.161050 | orchestrator | Friday 30 May 2025 00:42:27 +0000 (0:00:00.697) 0:00:07.384 ************ 2025-05-30 00:42:27.161075 | orchestrator | =============================================================================== 2025-05-30 00:42:27.162062 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.55s 2025-05-30 00:42:27.162344 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.04s 2025-05-30 00:42:27.162979 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.92s 2025-05-30 00:42:27.163400 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.70s 2025-05-30 00:42:29.661593 | orchestrator | 2025-05-30 00:42:29 | INFO  | Task 878afca3-bcf7-4088-ba86-bcec49e0c3d8 (ceph-configure-lvm-volumes) was prepared for execution. 2025-05-30 00:42:29.661751 | orchestrator | 2025-05-30 00:42:29 | INFO  | It takes a moment until task 878afca3-bcf7-4088-ba86-bcec49e0c3d8 (ceph-configure-lvm-volumes) has been started and output is visible here. 
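The osism.commons.facts role creates the custom facts directory on every host and would copy any *.fact files into it (skipped in this run); the follow-up play then re-gathers facts so the custom values appear under ansible_local. The directory below is the standard Ansible local-facts path, assumed here since the output does not print it:

  sudo mkdir -p /etc/ansible/facts.d
  # *.fact files placed here are picked up as ansible_local on the next
  # fact-gathering run, like the "Gathers facts about hosts" task above.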
2025-05-30 00:42:32.688574 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-30 00:42:33.205860 | orchestrator | 2025-05-30 00:42:33.205963 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-30 00:42:33.205978 | orchestrator | 2025-05-30 00:42:33.205989 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-30 00:42:33.205999 | orchestrator | Friday 30 May 2025 00:42:33 +0000 (0:00:00.441) 0:00:00.441 ************ 2025-05-30 00:42:33.513773 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-30 00:42:33.517012 | orchestrator | 2025-05-30 00:42:33.517799 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-30 00:42:33.520290 | orchestrator | Friday 30 May 2025 00:42:33 +0000 (0:00:00.312) 0:00:00.754 ************ 2025-05-30 00:42:33.687136 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:42:33.687860 | orchestrator | 2025-05-30 00:42:33.687967 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:42:33.688223 | orchestrator | Friday 30 May 2025 00:42:33 +0000 (0:00:00.171) 0:00:00.925 ************ 2025-05-30 00:42:34.109890 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-05-30 00:42:34.110199 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-05-30 00:42:34.110255 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-05-30 00:42:34.110319 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-05-30 00:42:34.114159 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-05-30 00:42:34.114222 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-05-30 00:42:34.114498 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-05-30 00:42:34.115334 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-05-30 00:42:34.115489 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-05-30 00:42:34.116830 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-05-30 00:42:34.118083 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-05-30 00:42:34.119514 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-05-30 00:42:34.119821 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-05-30 00:42:34.119842 | orchestrator | 2025-05-30 00:42:34.119855 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:42:34.119869 | orchestrator | Friday 30 May 2025 00:42:34 +0000 (0:00:00.425) 0:00:01.350 ************ 2025-05-30 00:42:34.344566 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:34.344763 | orchestrator | 2025-05-30 00:42:34.344864 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:42:34.345841 | orchestrator | Friday 30 May 2025 00:42:34 +0000 
(0:00:00.231) 0:00:01.582 ************ 2025-05-30 00:42:34.507269 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:34.507369 | orchestrator | 2025-05-30 00:42:34.507483 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:42:34.507688 | orchestrator | Friday 30 May 2025 00:42:34 +0000 (0:00:00.165) 0:00:01.748 ************ 2025-05-30 00:42:34.673243 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:34.673351 | orchestrator | 2025-05-30 00:42:34.673368 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:42:34.674842 | orchestrator | Friday 30 May 2025 00:42:34 +0000 (0:00:00.163) 0:00:01.911 ************ 2025-05-30 00:42:34.836898 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:34.837002 | orchestrator | 2025-05-30 00:42:34.837085 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:42:34.837122 | orchestrator | Friday 30 May 2025 00:42:34 +0000 (0:00:00.165) 0:00:02.077 ************ 2025-05-30 00:42:35.015994 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:35.016305 | orchestrator | 2025-05-30 00:42:35.016662 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:42:35.017166 | orchestrator | Friday 30 May 2025 00:42:35 +0000 (0:00:00.179) 0:00:02.256 ************ 2025-05-30 00:42:35.189236 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:35.190736 | orchestrator | 2025-05-30 00:42:35.191241 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:42:35.191729 | orchestrator | Friday 30 May 2025 00:42:35 +0000 (0:00:00.172) 0:00:02.429 ************ 2025-05-30 00:42:35.346804 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:35.351350 | orchestrator | 2025-05-30 00:42:35.351393 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:42:35.351403 | orchestrator | Friday 30 May 2025 00:42:35 +0000 (0:00:00.158) 0:00:02.587 ************ 2025-05-30 00:42:35.502855 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:35.502952 | orchestrator | 2025-05-30 00:42:35.505122 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:42:35.507266 | orchestrator | Friday 30 May 2025 00:42:35 +0000 (0:00:00.153) 0:00:02.740 ************ 2025-05-30 00:42:36.019545 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9a6319b3-0c44-4d2f-bfc1-43899b1e392d) 2025-05-30 00:42:36.022936 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9a6319b3-0c44-4d2f-bfc1-43899b1e392d) 2025-05-30 00:42:36.023370 | orchestrator | 2025-05-30 00:42:36.023992 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:42:36.024945 | orchestrator | Friday 30 May 2025 00:42:36 +0000 (0:00:00.518) 0:00:03.259 ************ 2025-05-30 00:42:36.634177 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5232ed07-4d85-4988-9bc7-7d761a8f0a42) 2025-05-30 00:42:36.636972 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5232ed07-4d85-4988-9bc7-7d761a8f0a42) 2025-05-30 00:42:36.638155 | orchestrator | 2025-05-30 00:42:36.639076 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 
00:42:36.640151 | orchestrator | Friday 30 May 2025 00:42:36 +0000 (0:00:00.613) 0:00:03.873 ************ 2025-05-30 00:42:37.046144 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d57cbd6a-67f1-4040-83cf-671f4c3c6a1f) 2025-05-30 00:42:37.048254 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d57cbd6a-67f1-4040-83cf-671f4c3c6a1f) 2025-05-30 00:42:37.049063 | orchestrator | 2025-05-30 00:42:37.049892 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:42:37.050145 | orchestrator | Friday 30 May 2025 00:42:37 +0000 (0:00:00.410) 0:00:04.283 ************ 2025-05-30 00:42:37.452350 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_76f37bde-13ed-44ba-8084-a2417c9798d9) 2025-05-30 00:42:37.453031 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_76f37bde-13ed-44ba-8084-a2417c9798d9) 2025-05-30 00:42:37.453448 | orchestrator | 2025-05-30 00:42:37.454177 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:42:37.454603 | orchestrator | Friday 30 May 2025 00:42:37 +0000 (0:00:00.409) 0:00:04.692 ************ 2025-05-30 00:42:37.755663 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-30 00:42:37.755773 | orchestrator | 2025-05-30 00:42:37.756547 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:42:37.757045 | orchestrator | Friday 30 May 2025 00:42:37 +0000 (0:00:00.300) 0:00:04.993 ************ 2025-05-30 00:42:38.108976 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-05-30 00:42:38.113380 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-05-30 00:42:38.114121 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-05-30 00:42:38.115260 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-05-30 00:42:38.116388 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-05-30 00:42:38.117225 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-05-30 00:42:38.117262 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-05-30 00:42:38.117742 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-05-30 00:42:38.118429 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-05-30 00:42:38.118973 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-05-30 00:42:38.119355 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-05-30 00:42:38.120585 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-05-30 00:42:38.121031 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-05-30 00:42:38.121346 | orchestrator | 2025-05-30 00:42:38.122585 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:42:38.122821 | orchestrator | Friday 30 May 2025 00:42:38 +0000 
(0:00:00.354) 0:00:05.348 ************ 2025-05-30 00:42:38.294997 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:38.296035 | orchestrator | 2025-05-30 00:42:38.298409 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:42:38.298794 | orchestrator | Friday 30 May 2025 00:42:38 +0000 (0:00:00.184) 0:00:05.532 ************ 2025-05-30 00:42:38.479305 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:38.479564 | orchestrator | 2025-05-30 00:42:38.480852 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:42:38.481299 | orchestrator | Friday 30 May 2025 00:42:38 +0000 (0:00:00.186) 0:00:05.718 ************ 2025-05-30 00:42:38.666298 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:38.666505 | orchestrator | 2025-05-30 00:42:38.667780 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:42:38.670654 | orchestrator | Friday 30 May 2025 00:42:38 +0000 (0:00:00.186) 0:00:05.905 ************ 2025-05-30 00:42:38.882470 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:38.882661 | orchestrator | 2025-05-30 00:42:38.882763 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:42:38.883188 | orchestrator | Friday 30 May 2025 00:42:38 +0000 (0:00:00.218) 0:00:06.123 ************ 2025-05-30 00:42:39.062304 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:39.062561 | orchestrator | 2025-05-30 00:42:39.066948 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:42:39.066994 | orchestrator | Friday 30 May 2025 00:42:39 +0000 (0:00:00.177) 0:00:06.301 ************ 2025-05-30 00:42:39.700051 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:39.702743 | orchestrator | 2025-05-30 00:42:39.702817 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:42:39.702882 | orchestrator | Friday 30 May 2025 00:42:39 +0000 (0:00:00.639) 0:00:06.940 ************ 2025-05-30 00:42:39.896443 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:39.896611 | orchestrator | 2025-05-30 00:42:39.897218 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:42:39.898005 | orchestrator | Friday 30 May 2025 00:42:39 +0000 (0:00:00.195) 0:00:07.136 ************ 2025-05-30 00:42:40.139871 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:40.140434 | orchestrator | 2025-05-30 00:42:40.141369 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:42:40.145525 | orchestrator | Friday 30 May 2025 00:42:40 +0000 (0:00:00.233) 0:00:07.370 ************ 2025-05-30 00:42:40.904715 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-05-30 00:42:40.904815 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-05-30 00:42:40.905170 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-05-30 00:42:40.905598 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-05-30 00:42:40.905980 | orchestrator | 2025-05-30 00:42:40.906478 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:42:40.907740 | orchestrator | Friday 30 May 2025 00:42:40 +0000 (0:00:00.770) 0:00:08.140 ************ 2025-05-30 00:42:41.113672 | orchestrator | 
skipping: [testbed-node-3] 2025-05-30 00:42:41.114538 | orchestrator | 2025-05-30 00:42:41.116972 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:42:41.117290 | orchestrator | Friday 30 May 2025 00:42:41 +0000 (0:00:00.212) 0:00:08.352 ************ 2025-05-30 00:42:41.331914 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:41.332469 | orchestrator | 2025-05-30 00:42:41.332745 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:42:41.335038 | orchestrator | Friday 30 May 2025 00:42:41 +0000 (0:00:00.218) 0:00:08.571 ************ 2025-05-30 00:42:41.593466 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:41.595931 | orchestrator | 2025-05-30 00:42:41.595972 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:42:41.595986 | orchestrator | Friday 30 May 2025 00:42:41 +0000 (0:00:00.257) 0:00:08.828 ************ 2025-05-30 00:42:41.865695 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:41.865793 | orchestrator | 2025-05-30 00:42:41.865935 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-30 00:42:41.866262 | orchestrator | Friday 30 May 2025 00:42:41 +0000 (0:00:00.273) 0:00:09.102 ************ 2025-05-30 00:42:42.070423 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-05-30 00:42:42.070526 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-05-30 00:42:42.070940 | orchestrator | 2025-05-30 00:42:42.073311 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-30 00:42:42.073405 | orchestrator | Friday 30 May 2025 00:42:42 +0000 (0:00:00.205) 0:00:09.308 ************ 2025-05-30 00:42:42.224260 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:42.224423 | orchestrator | 2025-05-30 00:42:42.224742 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-30 00:42:42.225028 | orchestrator | Friday 30 May 2025 00:42:42 +0000 (0:00:00.156) 0:00:09.464 ************ 2025-05-30 00:42:42.369597 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:42.369784 | orchestrator | 2025-05-30 00:42:42.374328 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-30 00:42:42.374414 | orchestrator | Friday 30 May 2025 00:42:42 +0000 (0:00:00.145) 0:00:09.610 ************ 2025-05-30 00:42:42.615943 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:42.616049 | orchestrator | 2025-05-30 00:42:42.616226 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-30 00:42:42.616518 | orchestrator | Friday 30 May 2025 00:42:42 +0000 (0:00:00.246) 0:00:09.856 ************ 2025-05-30 00:42:42.733687 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:42:42.734905 | orchestrator | 2025-05-30 00:42:42.734960 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-30 00:42:42.734975 | orchestrator | Friday 30 May 2025 00:42:42 +0000 (0:00:00.115) 0:00:09.971 ************ 2025-05-30 00:42:42.882408 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6d0cb66e-f8af-5d02-a2d6-05303feeced3'}}) 2025-05-30 00:42:42.882515 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 
'value': {'osd_lvm_uuid': 'f43ff32d-4fc4-5ece-8353-26072ce1c913'}}) 2025-05-30 00:42:42.886270 | orchestrator | 2025-05-30 00:42:42.886319 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-30 00:42:42.886332 | orchestrator | Friday 30 May 2025 00:42:42 +0000 (0:00:00.148) 0:00:10.120 ************ 2025-05-30 00:42:43.008263 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6d0cb66e-f8af-5d02-a2d6-05303feeced3'}})  2025-05-30 00:42:43.008364 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f43ff32d-4fc4-5ece-8353-26072ce1c913'}})  2025-05-30 00:42:43.008469 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:43.008486 | orchestrator | 2025-05-30 00:42:43.010538 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-30 00:42:43.010741 | orchestrator | Friday 30 May 2025 00:42:43 +0000 (0:00:00.127) 0:00:10.248 ************ 2025-05-30 00:42:43.146846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6d0cb66e-f8af-5d02-a2d6-05303feeced3'}})  2025-05-30 00:42:43.146949 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f43ff32d-4fc4-5ece-8353-26072ce1c913'}})  2025-05-30 00:42:43.147057 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:43.147077 | orchestrator | 2025-05-30 00:42:43.147302 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-30 00:42:43.147524 | orchestrator | Friday 30 May 2025 00:42:43 +0000 (0:00:00.136) 0:00:10.384 ************ 2025-05-30 00:42:43.290998 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6d0cb66e-f8af-5d02-a2d6-05303feeced3'}})  2025-05-30 00:42:43.291097 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f43ff32d-4fc4-5ece-8353-26072ce1c913'}})  2025-05-30 00:42:43.291187 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:43.291292 | orchestrator | 2025-05-30 00:42:43.291391 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-30 00:42:43.291944 | orchestrator | Friday 30 May 2025 00:42:43 +0000 (0:00:00.147) 0:00:10.531 ************ 2025-05-30 00:42:43.402799 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:42:43.402987 | orchestrator | 2025-05-30 00:42:43.403009 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-30 00:42:43.403214 | orchestrator | Friday 30 May 2025 00:42:43 +0000 (0:00:00.111) 0:00:10.643 ************ 2025-05-30 00:42:43.496527 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:42:43.497250 | orchestrator | 2025-05-30 00:42:43.498877 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-30 00:42:43.499184 | orchestrator | Friday 30 May 2025 00:42:43 +0000 (0:00:00.092) 0:00:10.736 ************ 2025-05-30 00:42:43.625663 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:43.627290 | orchestrator | 2025-05-30 00:42:43.629849 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-30 00:42:43.630243 | orchestrator | Friday 30 May 2025 00:42:43 +0000 (0:00:00.128) 0:00:10.865 ************ 2025-05-30 00:42:43.738146 | orchestrator | skipping: [testbed-node-3] 2025-05-30 
00:42:43.738934 | orchestrator | 2025-05-30 00:42:43.740315 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-05-30 00:42:43.742541 | orchestrator | Friday 30 May 2025 00:42:43 +0000 (0:00:00.112) 0:00:10.977 ************ 2025-05-30 00:42:43.855470 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:43.855719 | orchestrator | 2025-05-30 00:42:43.856910 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-30 00:42:43.858359 | orchestrator | Friday 30 May 2025 00:42:43 +0000 (0:00:00.115) 0:00:11.093 ************ 2025-05-30 00:42:44.159045 | orchestrator | ok: [testbed-node-3] => { 2025-05-30 00:42:44.160029 | orchestrator |  "ceph_osd_devices": { 2025-05-30 00:42:44.165291 | orchestrator |  "sdb": { 2025-05-30 00:42:44.165344 | orchestrator |  "osd_lvm_uuid": "6d0cb66e-f8af-5d02-a2d6-05303feeced3" 2025-05-30 00:42:44.165516 | orchestrator |  }, 2025-05-30 00:42:44.167034 | orchestrator |  "sdc": { 2025-05-30 00:42:44.167810 | orchestrator |  "osd_lvm_uuid": "f43ff32d-4fc4-5ece-8353-26072ce1c913" 2025-05-30 00:42:44.167889 | orchestrator |  } 2025-05-30 00:42:44.168318 | orchestrator |  } 2025-05-30 00:42:44.168699 | orchestrator | } 2025-05-30 00:42:44.170217 | orchestrator | 2025-05-30 00:42:44.170243 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-05-30 00:42:44.170255 | orchestrator | Friday 30 May 2025 00:42:44 +0000 (0:00:00.306) 0:00:11.399 ************ 2025-05-30 00:42:44.290872 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:44.291023 | orchestrator | 2025-05-30 00:42:44.292600 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-05-30 00:42:44.294499 | orchestrator | Friday 30 May 2025 00:42:44 +0000 (0:00:00.130) 0:00:11.530 ************ 2025-05-30 00:42:44.433647 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:44.433751 | orchestrator | 2025-05-30 00:42:44.433881 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-05-30 00:42:44.433901 | orchestrator | Friday 30 May 2025 00:42:44 +0000 (0:00:00.143) 0:00:11.673 ************ 2025-05-30 00:42:44.557279 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:42:44.557593 | orchestrator | 2025-05-30 00:42:44.558424 | orchestrator | TASK [Print configuration data] ************************************************ 2025-05-30 00:42:44.558702 | orchestrator | Friday 30 May 2025 00:42:44 +0000 (0:00:00.123) 0:00:11.797 ************ 2025-05-30 00:42:44.831611 | orchestrator | changed: [testbed-node-3] => { 2025-05-30 00:42:44.835902 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-30 00:42:44.835949 | orchestrator |  "ceph_osd_devices": { 2025-05-30 00:42:44.836975 | orchestrator |  "sdb": { 2025-05-30 00:42:44.838545 | orchestrator |  "osd_lvm_uuid": "6d0cb66e-f8af-5d02-a2d6-05303feeced3" 2025-05-30 00:42:44.839326 | orchestrator |  }, 2025-05-30 00:42:44.839857 | orchestrator |  "sdc": { 2025-05-30 00:42:44.840506 | orchestrator |  "osd_lvm_uuid": "f43ff32d-4fc4-5ece-8353-26072ce1c913" 2025-05-30 00:42:44.840893 | orchestrator |  } 2025-05-30 00:42:44.841482 | orchestrator |  }, 2025-05-30 00:42:44.842116 | orchestrator |  "lvm_volumes": [ 2025-05-30 00:42:44.842435 | orchestrator |  { 2025-05-30 00:42:44.843024 | orchestrator |  "data": "osd-block-6d0cb66e-f8af-5d02-a2d6-05303feeced3", 2025-05-30 00:42:44.843363 | orchestrator |  
"data_vg": "ceph-6d0cb66e-f8af-5d02-a2d6-05303feeced3" 2025-05-30 00:42:44.843979 | orchestrator |  }, 2025-05-30 00:42:44.844200 | orchestrator |  { 2025-05-30 00:42:44.844775 | orchestrator |  "data": "osd-block-f43ff32d-4fc4-5ece-8353-26072ce1c913", 2025-05-30 00:42:44.845480 | orchestrator |  "data_vg": "ceph-f43ff32d-4fc4-5ece-8353-26072ce1c913" 2025-05-30 00:42:44.845690 | orchestrator |  } 2025-05-30 00:42:44.846167 | orchestrator |  ] 2025-05-30 00:42:44.846460 | orchestrator |  } 2025-05-30 00:42:44.846939 | orchestrator | } 2025-05-30 00:42:44.847682 | orchestrator | 2025-05-30 00:42:44.848138 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-30 00:42:44.848463 | orchestrator | Friday 30 May 2025 00:42:44 +0000 (0:00:00.270) 0:00:12.068 ************ 2025-05-30 00:42:46.617163 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-30 00:42:46.617461 | orchestrator | 2025-05-30 00:42:46.618095 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-30 00:42:46.618383 | orchestrator | 2025-05-30 00:42:46.618764 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-30 00:42:46.619334 | orchestrator | Friday 30 May 2025 00:42:46 +0000 (0:00:01.784) 0:00:13.853 ************ 2025-05-30 00:42:46.839745 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-30 00:42:46.842711 | orchestrator | 2025-05-30 00:42:46.842744 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-30 00:42:46.842757 | orchestrator | Friday 30 May 2025 00:42:46 +0000 (0:00:00.227) 0:00:14.080 ************ 2025-05-30 00:42:47.052688 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:42:47.053999 | orchestrator | 2025-05-30 00:42:47.054434 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:42:47.054774 | orchestrator | Friday 30 May 2025 00:42:47 +0000 (0:00:00.212) 0:00:14.293 ************ 2025-05-30 00:42:47.398930 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-30 00:42:47.399035 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-30 00:42:47.399606 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-30 00:42:47.399912 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-30 00:42:47.400907 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-30 00:42:47.401004 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-30 00:42:47.401356 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-30 00:42:47.402125 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-30 00:42:47.402200 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-30 00:42:47.402816 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-30 00:42:47.403569 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-30 00:42:47.403695 | orchestrator | included: 
2025-05-30 00:42:46.618095 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-05-30 00:42:46.618383 | orchestrator |
2025-05-30 00:42:46.618764 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-30 00:42:46.619334 | orchestrator | Friday 30 May 2025 00:42:46 +0000 (0:00:01.784) 0:00:13.853 ************
2025-05-30 00:42:46.839745 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-05-30 00:42:46.842711 | orchestrator |
2025-05-30 00:42:46.842744 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-30 00:42:46.842757 | orchestrator | Friday 30 May 2025 00:42:46 +0000 (0:00:00.227) 0:00:14.080 ************
2025-05-30 00:42:47.052688 | orchestrator | ok: [testbed-node-4]
2025-05-30 00:42:47.053999 | orchestrator |
2025-05-30 00:42:47.054434 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-30 00:42:47.054774 | orchestrator | Friday 30 May 2025 00:42:47 +0000 (0:00:00.212) 0:00:14.293 ************
2025-05-30 00:42:47.398930 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-05-30 00:42:47.399035 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-05-30 00:42:47.399606 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-05-30 00:42:47.399912 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-05-30 00:42:47.400907 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-05-30 00:42:47.401004 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-05-30 00:42:47.401356 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-05-30 00:42:47.402125 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-05-30 00:42:47.402200 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-05-30 00:42:47.402816 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-05-30 00:42:47.403569 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-05-30 00:42:47.403695 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-05-30 00:42:47.404257 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-05-30 00:42:47.404678 | orchestrator |
2025-05-30 00:42:47.405150 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-30 00:42:47.406168 | orchestrator | Friday 30 May 2025 00:42:47 +0000 (0:00:00.340) 0:00:14.633 ************
2025-05-30 00:42:47.594254 | orchestrator | skipping: [testbed-node-4]
2025-05-30 00:42:47.595506 | orchestrator |
2025-05-30 00:42:47.595720 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-30 00:42:47.596044 | orchestrator | Friday 30 May 2025 00:42:47 +0000 (0:00:00.201) 0:00:14.835 ************
2025-05-30 00:42:47.779016 | orchestrator | skipping: [testbed-node-4]
2025-05-30 00:42:47.779113 | orchestrator |
2025-05-30 00:42:47.779127 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-30 00:42:47.779215 | orchestrator | Friday 30 May 2025 00:42:47 +0000 (0:00:00.182) 0:00:15.018 ************
2025-05-30 00:42:47.932146 | orchestrator | skipping: [testbed-node-4]
2025-05-30 00:42:47.932270 | orchestrator |
2025-05-30 00:42:47.935424 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-30 00:42:47.935484 | orchestrator | Friday 30 May 2025 00:42:47 +0000 (0:00:00.153) 0:00:15.171 ************
2025-05-30 00:42:48.100571 | orchestrator | skipping: [testbed-node-4]
2025-05-30 00:42:48.100840 | orchestrator |
2025-05-30 00:42:48.102860 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-30 00:42:48.103125 | orchestrator | Friday 30 May 2025 00:42:48 +0000 (0:00:00.168) 0:00:15.340 ************
2025-05-30 00:42:48.384063 | orchestrator | skipping: [testbed-node-4]
2025-05-30 00:42:48.384166 | orchestrator |
2025-05-30 00:42:48.384182 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-30 00:42:48.384408 | orchestrator | Friday 30 May 2025 00:42:48 +0000 (0:00:00.281) 0:00:15.621 ************
2025-05-30 00:42:48.557073 | orchestrator | skipping: [testbed-node-4]
2025-05-30 00:42:48.557983 | orchestrator |
2025-05-30 00:42:48.559781 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-30 00:42:48.559806 | orchestrator | Friday 30 May 2025 00:42:48 +0000 (0:00:00.176) 0:00:15.797 ************
2025-05-30 00:42:48.742712 | orchestrator | skipping: [testbed-node-4]
2025-05-30 00:42:48.744146 | orchestrator |
2025-05-30 00:42:48.744939 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-30 00:42:48.745488 | orchestrator | Friday 30 May 2025 00:42:48 +0000 (0:00:00.183) 0:00:15.980 ************
2025-05-30 00:42:48.916119 | orchestrator | skipping: [testbed-node-4]
2025-05-30 00:42:48.917373 | orchestrator |
2025-05-30 00:42:48.918329 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-30 00:42:48.919103 | orchestrator | Friday 30 May 2025 00:42:48 +0000 (0:00:00.172) 0:00:16.153 ************
2025-05-30 00:42:49.284942 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_62bf4b98-4a21-4975-9c67-1ea56f697b51)
2025-05-30 00:42:49.285556 | orchestrator | ok: [testbed-node-4] =>
(item=scsi-SQEMU_QEMU_HARDDISK_62bf4b98-4a21-4975-9c67-1ea56f697b51) 2025-05-30 00:42:49.286328 | orchestrator | 2025-05-30 00:42:49.286587 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:42:49.290210 | orchestrator | Friday 30 May 2025 00:42:49 +0000 (0:00:00.372) 0:00:16.526 ************ 2025-05-30 00:42:49.708547 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_173bbd31-d008-4662-8aea-7cfb1ab21884) 2025-05-30 00:42:49.711978 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_173bbd31-d008-4662-8aea-7cfb1ab21884) 2025-05-30 00:42:49.714128 | orchestrator | 2025-05-30 00:42:49.715274 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:42:49.716243 | orchestrator | Friday 30 May 2025 00:42:49 +0000 (0:00:00.421) 0:00:16.947 ************ 2025-05-30 00:42:50.087738 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fd28e93c-f7f0-4d71-9af0-3817aadd609f) 2025-05-30 00:42:50.088340 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fd28e93c-f7f0-4d71-9af0-3817aadd609f) 2025-05-30 00:42:50.088891 | orchestrator | 2025-05-30 00:42:50.089297 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:42:50.089508 | orchestrator | Friday 30 May 2025 00:42:50 +0000 (0:00:00.380) 0:00:17.328 ************ 2025-05-30 00:42:50.478934 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fcd55a48-2b4a-45aa-bb97-767fc341b1ef) 2025-05-30 00:42:50.479311 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fcd55a48-2b4a-45aa-bb97-767fc341b1ef) 2025-05-30 00:42:50.480136 | orchestrator | 2025-05-30 00:42:50.481049 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:42:50.481872 | orchestrator | Friday 30 May 2025 00:42:50 +0000 (0:00:00.391) 0:00:17.719 ************ 2025-05-30 00:42:50.792792 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-30 00:42:50.794399 | orchestrator | 2025-05-30 00:42:50.794434 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:42:50.794446 | orchestrator | Friday 30 May 2025 00:42:50 +0000 (0:00:00.312) 0:00:18.032 ************ 2025-05-30 00:42:51.089284 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-05-30 00:42:51.090703 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-30 00:42:51.091740 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-30 00:42:51.093394 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-30 00:42:51.094170 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-30 00:42:51.094619 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-30 00:42:51.095410 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-30 00:42:51.096083 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-30 00:42:51.096411 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-30 00:42:51.097181 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-30 00:42:51.097513 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-05-30 00:42:51.098313 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-30 00:42:51.098765 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-30 00:42:51.099206 | orchestrator | 2025-05-30 00:42:51.099781 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:42:51.100174 | orchestrator | Friday 30 May 2025 00:42:51 +0000 (0:00:00.294) 0:00:18.326 ************ 2025-05-30 00:42:51.578475 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:42:51.578693 | orchestrator | 2025-05-30 00:42:51.580463 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:42:51.582012 | orchestrator | Friday 30 May 2025 00:42:51 +0000 (0:00:00.491) 0:00:18.817 ************ 2025-05-30 00:42:51.764581 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:42:51.765990 | orchestrator | 2025-05-30 00:42:51.766410 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:42:51.767421 | orchestrator | Friday 30 May 2025 00:42:51 +0000 (0:00:00.186) 0:00:19.004 ************ 2025-05-30 00:42:51.957494 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:42:51.957611 | orchestrator | 2025-05-30 00:42:51.958091 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:42:51.958688 | orchestrator | Friday 30 May 2025 00:42:51 +0000 (0:00:00.189) 0:00:19.194 ************ 2025-05-30 00:42:52.145974 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:42:52.146549 | orchestrator | 2025-05-30 00:42:52.147798 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:42:52.148343 | orchestrator | Friday 30 May 2025 00:42:52 +0000 (0:00:00.191) 0:00:19.385 ************ 2025-05-30 00:42:52.341106 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:42:52.341908 | orchestrator | 2025-05-30 00:42:52.342413 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:42:52.344610 | orchestrator | Friday 30 May 2025 00:42:52 +0000 (0:00:00.194) 0:00:19.580 ************ 2025-05-30 00:42:52.520805 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:42:52.520997 | orchestrator | 2025-05-30 00:42:52.521793 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:42:52.522865 | orchestrator | Friday 30 May 2025 00:42:52 +0000 (0:00:00.179) 0:00:19.759 ************ 2025-05-30 00:42:52.698688 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:42:52.699243 | orchestrator | 2025-05-30 00:42:52.699456 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:42:52.700278 | orchestrator | Friday 30 May 2025 00:42:52 +0000 (0:00:00.178) 0:00:19.938 ************ 2025-05-30 00:42:52.893408 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:42:52.893624 | orchestrator | 2025-05-30 00:42:52.895155 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-05-30 00:42:52.896717 | orchestrator | Friday 30 May 2025 00:42:52 +0000 (0:00:00.194) 0:00:20.132 ************ 2025-05-30 00:42:53.637535 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-30 00:42:53.640104 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-30 00:42:53.641564 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-30 00:42:53.641854 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-30 00:42:53.642200 | orchestrator | 2025-05-30 00:42:53.642673 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:42:53.642992 | orchestrator | Friday 30 May 2025 00:42:53 +0000 (0:00:00.743) 0:00:20.875 ************ 2025-05-30 00:42:53.819837 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:42:53.819945 | orchestrator | 2025-05-30 00:42:53.820117 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:42:53.820179 | orchestrator | Friday 30 May 2025 00:42:53 +0000 (0:00:00.184) 0:00:21.059 ************ 2025-05-30 00:42:54.300552 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:42:54.300805 | orchestrator | 2025-05-30 00:42:54.300927 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:42:54.301538 | orchestrator | Friday 30 May 2025 00:42:54 +0000 (0:00:00.480) 0:00:21.540 ************ 2025-05-30 00:42:54.489503 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:42:54.489724 | orchestrator | 2025-05-30 00:42:54.490952 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:42:54.492133 | orchestrator | Friday 30 May 2025 00:42:54 +0000 (0:00:00.189) 0:00:21.729 ************ 2025-05-30 00:42:54.671968 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:42:54.673107 | orchestrator | 2025-05-30 00:42:54.674250 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-30 00:42:54.674979 | orchestrator | Friday 30 May 2025 00:42:54 +0000 (0:00:00.180) 0:00:21.910 ************ 2025-05-30 00:42:54.833102 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-05-30 00:42:54.834198 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-05-30 00:42:54.839276 | orchestrator | 2025-05-30 00:42:54.839307 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-30 00:42:54.840319 | orchestrator | Friday 30 May 2025 00:42:54 +0000 (0:00:00.160) 0:00:22.071 ************ 2025-05-30 00:42:54.964513 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:42:54.965129 | orchestrator | 2025-05-30 00:42:54.966330 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-30 00:42:54.967220 | orchestrator | Friday 30 May 2025 00:42:54 +0000 (0:00:00.131) 0:00:22.202 ************ 2025-05-30 00:42:55.114860 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:42:55.115972 | orchestrator | 2025-05-30 00:42:55.117610 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-30 00:42:55.118677 | orchestrator | Friday 30 May 2025 00:42:55 +0000 (0:00:00.151) 0:00:22.353 ************ 2025-05-30 00:42:55.250802 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:42:55.251686 | orchestrator | 2025-05-30 
00:42:55.252819 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-30 00:42:55.257288 | orchestrator | Friday 30 May 2025 00:42:55 +0000 (0:00:00.135) 0:00:22.489 ************ 2025-05-30 00:42:55.404023 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:42:55.404721 | orchestrator | 2025-05-30 00:42:55.406500 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-30 00:42:55.407353 | orchestrator | Friday 30 May 2025 00:42:55 +0000 (0:00:00.153) 0:00:22.642 ************ 2025-05-30 00:42:55.590235 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '50b3064c-7478-543e-8abf-661fdbdc95ce'}}) 2025-05-30 00:42:55.592071 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '749c70bc-bf8f-56a3-a425-711d4530659c'}}) 2025-05-30 00:42:55.593683 | orchestrator | 2025-05-30 00:42:55.594900 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-30 00:42:55.596095 | orchestrator | Friday 30 May 2025 00:42:55 +0000 (0:00:00.185) 0:00:22.828 ************ 2025-05-30 00:42:55.751181 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '50b3064c-7478-543e-8abf-661fdbdc95ce'}})  2025-05-30 00:42:55.752350 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '749c70bc-bf8f-56a3-a425-711d4530659c'}})  2025-05-30 00:42:55.753486 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:42:55.754550 | orchestrator | 2025-05-30 00:42:55.755569 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-30 00:42:55.756444 | orchestrator | Friday 30 May 2025 00:42:55 +0000 (0:00:00.161) 0:00:22.990 ************ 2025-05-30 00:42:55.926192 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '50b3064c-7478-543e-8abf-661fdbdc95ce'}})  2025-05-30 00:42:55.927750 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '749c70bc-bf8f-56a3-a425-711d4530659c'}})  2025-05-30 00:42:55.929082 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:42:55.931093 | orchestrator | 2025-05-30 00:42:55.931120 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-30 00:42:55.931954 | orchestrator | Friday 30 May 2025 00:42:55 +0000 (0:00:00.174) 0:00:23.164 ************ 2025-05-30 00:42:56.294545 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '50b3064c-7478-543e-8abf-661fdbdc95ce'}})  2025-05-30 00:42:56.295624 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '749c70bc-bf8f-56a3-a425-711d4530659c'}})  2025-05-30 00:42:56.296872 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:42:56.299112 | orchestrator | 2025-05-30 00:42:56.299143 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-30 00:42:56.299157 | orchestrator | Friday 30 May 2025 00:42:56 +0000 (0:00:00.369) 0:00:23.534 ************ 2025-05-30 00:42:56.445792 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:42:56.446002 | orchestrator | 2025-05-30 00:42:56.447004 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-30 00:42:56.447854 | orchestrator | Friday 30 May 2025 00:42:56 +0000 
(0:00:00.151) 0:00:23.685 ************ 2025-05-30 00:42:56.593510 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:42:56.593755 | orchestrator | 2025-05-30 00:42:56.596147 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-30 00:42:56.599083 | orchestrator | Friday 30 May 2025 00:42:56 +0000 (0:00:00.147) 0:00:23.833 ************ 2025-05-30 00:42:56.730700 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:42:56.731238 | orchestrator | 2025-05-30 00:42:56.731914 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-30 00:42:56.733112 | orchestrator | Friday 30 May 2025 00:42:56 +0000 (0:00:00.136) 0:00:23.969 ************ 2025-05-30 00:42:56.869440 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:42:56.870285 | orchestrator | 2025-05-30 00:42:56.871147 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-05-30 00:42:56.871994 | orchestrator | Friday 30 May 2025 00:42:56 +0000 (0:00:00.138) 0:00:24.107 ************ 2025-05-30 00:42:57.015329 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:42:57.016259 | orchestrator | 2025-05-30 00:42:57.017147 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-30 00:42:57.019970 | orchestrator | Friday 30 May 2025 00:42:57 +0000 (0:00:00.145) 0:00:24.253 ************ 2025-05-30 00:42:57.155362 | orchestrator | ok: [testbed-node-4] => { 2025-05-30 00:42:57.155961 | orchestrator |  "ceph_osd_devices": { 2025-05-30 00:42:57.156447 | orchestrator |  "sdb": { 2025-05-30 00:42:57.157576 | orchestrator |  "osd_lvm_uuid": "50b3064c-7478-543e-8abf-661fdbdc95ce" 2025-05-30 00:42:57.158400 | orchestrator |  }, 2025-05-30 00:42:57.159209 | orchestrator |  "sdc": { 2025-05-30 00:42:57.160210 | orchestrator |  "osd_lvm_uuid": "749c70bc-bf8f-56a3-a425-711d4530659c" 2025-05-30 00:42:57.160898 | orchestrator |  } 2025-05-30 00:42:57.161852 | orchestrator |  } 2025-05-30 00:42:57.162115 | orchestrator | } 2025-05-30 00:42:57.162345 | orchestrator | 2025-05-30 00:42:57.162737 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-05-30 00:42:57.163194 | orchestrator | Friday 30 May 2025 00:42:57 +0000 (0:00:00.141) 0:00:24.394 ************ 2025-05-30 00:42:57.290423 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:42:57.290533 | orchestrator | 2025-05-30 00:42:57.291954 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-05-30 00:42:57.295229 | orchestrator | Friday 30 May 2025 00:42:57 +0000 (0:00:00.132) 0:00:24.527 ************ 2025-05-30 00:42:57.422542 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:42:57.422681 | orchestrator | 2025-05-30 00:42:57.422699 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-05-30 00:42:57.424819 | orchestrator | Friday 30 May 2025 00:42:57 +0000 (0:00:00.132) 0:00:24.659 ************ 2025-05-30 00:42:57.556562 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:42:57.556913 | orchestrator | 2025-05-30 00:42:57.557258 | orchestrator | TASK [Print configuration data] ************************************************ 2025-05-30 00:42:57.558380 | orchestrator | Friday 30 May 2025 00:42:57 +0000 (0:00:00.136) 0:00:24.795 ************ 2025-05-30 00:42:57.980584 | orchestrator | changed: [testbed-node-4] => { 2025-05-30 00:42:57.981724 | 
orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-30 00:42:57.983204 | orchestrator |  "ceph_osd_devices": { 2025-05-30 00:42:57.984351 | orchestrator |  "sdb": { 2025-05-30 00:42:57.985758 | orchestrator |  "osd_lvm_uuid": "50b3064c-7478-543e-8abf-661fdbdc95ce" 2025-05-30 00:42:57.986610 | orchestrator |  }, 2025-05-30 00:42:57.987290 | orchestrator |  "sdc": { 2025-05-30 00:42:57.988101 | orchestrator |  "osd_lvm_uuid": "749c70bc-bf8f-56a3-a425-711d4530659c" 2025-05-30 00:42:57.988773 | orchestrator |  } 2025-05-30 00:42:57.989799 | orchestrator |  }, 2025-05-30 00:42:57.989821 | orchestrator |  "lvm_volumes": [ 2025-05-30 00:42:57.990388 | orchestrator |  { 2025-05-30 00:42:57.990591 | orchestrator |  "data": "osd-block-50b3064c-7478-543e-8abf-661fdbdc95ce", 2025-05-30 00:42:57.991019 | orchestrator |  "data_vg": "ceph-50b3064c-7478-543e-8abf-661fdbdc95ce" 2025-05-30 00:42:57.991359 | orchestrator |  }, 2025-05-30 00:42:57.991847 | orchestrator |  { 2025-05-30 00:42:57.992258 | orchestrator |  "data": "osd-block-749c70bc-bf8f-56a3-a425-711d4530659c", 2025-05-30 00:42:57.992535 | orchestrator |  "data_vg": "ceph-749c70bc-bf8f-56a3-a425-711d4530659c" 2025-05-30 00:42:57.993063 | orchestrator |  } 2025-05-30 00:42:57.994401 | orchestrator |  ] 2025-05-30 00:42:57.995242 | orchestrator |  } 2025-05-30 00:42:57.995527 | orchestrator | } 2025-05-30 00:42:57.996559 | orchestrator | 2025-05-30 00:42:57.996587 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-30 00:42:57.996599 | orchestrator | Friday 30 May 2025 00:42:57 +0000 (0:00:00.421) 0:00:25.217 ************ 2025-05-30 00:42:59.339851 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-30 00:42:59.340241 | orchestrator | 2025-05-30 00:42:59.341009 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-30 00:42:59.341783 | orchestrator | 2025-05-30 00:42:59.342459 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-30 00:42:59.343161 | orchestrator | Friday 30 May 2025 00:42:59 +0000 (0:00:01.359) 0:00:26.577 ************ 2025-05-30 00:42:59.581189 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-30 00:42:59.581618 | orchestrator | 2025-05-30 00:42:59.583224 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-30 00:42:59.585492 | orchestrator | Friday 30 May 2025 00:42:59 +0000 (0:00:00.242) 0:00:26.820 ************ 2025-05-30 00:42:59.807593 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:42:59.808739 | orchestrator | 2025-05-30 00:42:59.809444 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:42:59.810743 | orchestrator | Friday 30 May 2025 00:42:59 +0000 (0:00:00.226) 0:00:27.046 ************ 2025-05-30 00:43:00.342226 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-05-30 00:43:00.344590 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-05-30 00:43:00.344623 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-05-30 00:43:00.345338 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-05-30 00:43:00.346767 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
2025-05-30 00:42:59.341009 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-05-30 00:42:59.341783 | orchestrator |
2025-05-30 00:42:59.342459 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-30 00:42:59.343161 | orchestrator | Friday 30 May 2025 00:42:59 +0000 (0:00:01.359) 0:00:26.577 ************
2025-05-30 00:42:59.581189 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-05-30 00:42:59.581618 | orchestrator |
2025-05-30 00:42:59.583224 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-30 00:42:59.585492 | orchestrator | Friday 30 May 2025 00:42:59 +0000 (0:00:00.242) 0:00:26.820 ************
2025-05-30 00:42:59.807593 | orchestrator | ok: [testbed-node-5]
2025-05-30 00:42:59.808739 | orchestrator |
2025-05-30 00:42:59.809444 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-30 00:42:59.810743 | orchestrator | Friday 30 May 2025 00:42:59 +0000 (0:00:00.226) 0:00:27.046 ************
2025-05-30 00:43:00.342226 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-05-30 00:43:00.344590 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-05-30 00:43:00.344623 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-05-30 00:43:00.345338 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-05-30 00:43:00.346767 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-05-30 00:43:00.347522 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-05-30 00:43:00.348714 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-05-30 00:43:00.349254 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-05-30 00:43:00.349345 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-05-30 00:43:00.350330 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-05-30 00:43:00.350466 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-05-30 00:43:00.351039 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-05-30 00:43:00.351286 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-05-30 00:43:00.351974 | orchestrator |
2025-05-30 00:43:00.352833 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-30 00:43:00.352856 | orchestrator | Friday 30 May 2025 00:43:00 +0000 (0:00:00.532) 0:00:27.579 ************
2025-05-30 00:43:00.549753 | orchestrator | skipping: [testbed-node-5]
2025-05-30 00:43:00.550482 | orchestrator |
2025-05-30 00:43:00.551275 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-30 00:43:00.552049 | orchestrator | Friday 30 May 2025 00:43:00 +0000 (0:00:00.210) 0:00:27.789 ************
2025-05-30 00:43:00.774916 | orchestrator | skipping: [testbed-node-5]
2025-05-30 00:43:00.775532 | orchestrator |
2025-05-30 00:43:00.776174 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-30 00:43:00.777093 | orchestrator | Friday 30 May 2025 00:43:00 +0000 (0:00:00.224) 0:00:28.014 ************
2025-05-30 00:43:00.974703 | orchestrator | skipping: [testbed-node-5]
2025-05-30 00:43:00.975980 | orchestrator |
2025-05-30 00:43:00.976703 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-30 00:43:00.978003 | orchestrator | Friday 30 May 2025 00:43:00 +0000 (0:00:00.199) 0:00:28.213 ************
2025-05-30 00:43:01.176690 | orchestrator | skipping: [testbed-node-5]
2025-05-30 00:43:01.176801 | orchestrator |
2025-05-30 00:43:01.177973 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-30 00:43:01.178326 | orchestrator | Friday 30 May 2025 00:43:01 +0000 (0:00:00.198) 0:00:28.412 ************
2025-05-30 00:43:01.367384 | orchestrator | skipping: [testbed-node-5]
2025-05-30 00:43:01.368000 | orchestrator |
2025-05-30 00:43:01.369019 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-30 00:43:01.370894 | orchestrator | Friday 30 May 2025 00:43:01 +0000 (0:00:00.193) 0:00:28.605 ************
2025-05-30 00:43:01.560296 | orchestrator | skipping: [testbed-node-5]
2025-05-30 00:43:01.561103 | orchestrator |
2025-05-30 00:43:01.561784 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-30 00:43:01.562492 | orchestrator | Friday 30 May 2025 00:43:01 +0000 (0:00:00.194) 0:00:28.800 ************
2025-05-30 00:43:01.745868 | orchestrator | skipping: [testbed-node-5]
2025-05-30 00:43:01.746005
| orchestrator | 2025-05-30 00:43:01.746665 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:43:01.747573 | orchestrator | Friday 30 May 2025 00:43:01 +0000 (0:00:00.183) 0:00:28.984 ************ 2025-05-30 00:43:01.950351 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:43:01.950549 | orchestrator | 2025-05-30 00:43:01.951784 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:43:01.952886 | orchestrator | Friday 30 May 2025 00:43:01 +0000 (0:00:00.205) 0:00:29.189 ************ 2025-05-30 00:43:02.545116 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c29df819-5e55-4aea-aecd-e9fcfd91068f) 2025-05-30 00:43:02.546301 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c29df819-5e55-4aea-aecd-e9fcfd91068f) 2025-05-30 00:43:02.548652 | orchestrator | 2025-05-30 00:43:02.548688 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:43:02.548704 | orchestrator | Friday 30 May 2025 00:43:02 +0000 (0:00:00.593) 0:00:29.782 ************ 2025-05-30 00:43:03.191859 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2529d57e-ffb4-494c-a22f-a2bb1703f8b2) 2025-05-30 00:43:03.192523 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2529d57e-ffb4-494c-a22f-a2bb1703f8b2) 2025-05-30 00:43:03.193287 | orchestrator | 2025-05-30 00:43:03.194214 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:43:03.194753 | orchestrator | Friday 30 May 2025 00:43:03 +0000 (0:00:00.646) 0:00:30.429 ************ 2025-05-30 00:43:03.830191 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c7216231-2c47-48eb-b4a1-b98b10008028) 2025-05-30 00:43:03.830424 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c7216231-2c47-48eb-b4a1-b98b10008028) 2025-05-30 00:43:03.831728 | orchestrator | 2025-05-30 00:43:03.832988 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:43:03.833626 | orchestrator | Friday 30 May 2025 00:43:03 +0000 (0:00:00.637) 0:00:31.066 ************ 2025-05-30 00:43:04.269462 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8d1e0c18-9aac-4f03-b30e-87512c271b47) 2025-05-30 00:43:04.270160 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8d1e0c18-9aac-4f03-b30e-87512c271b47) 2025-05-30 00:43:04.270825 | orchestrator | 2025-05-30 00:43:04.273352 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:43:04.273376 | orchestrator | Friday 30 May 2025 00:43:04 +0000 (0:00:00.441) 0:00:31.508 ************ 2025-05-30 00:43:04.601520 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-30 00:43:04.602352 | orchestrator | 2025-05-30 00:43:04.603003 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:43:04.603809 | orchestrator | Friday 30 May 2025 00:43:04 +0000 (0:00:00.331) 0:00:31.839 ************ 2025-05-30 00:43:05.014202 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-30 00:43:05.015074 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-05-30 00:43:05.015773 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-30 00:43:05.016400 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-30 00:43:05.019227 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-30 00:43:05.019579 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-30 00:43:05.019671 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-30 00:43:05.019685 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-30 00:43:05.019697 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-30 00:43:05.020795 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-30 00:43:05.021031 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-30 00:43:05.021307 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-30 00:43:05.022147 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-30 00:43:05.022454 | orchestrator | 2025-05-30 00:43:05.022861 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:43:05.023348 | orchestrator | Friday 30 May 2025 00:43:05 +0000 (0:00:00.413) 0:00:32.253 ************ 2025-05-30 00:43:05.225829 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:43:05.226672 | orchestrator | 2025-05-30 00:43:05.230137 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:43:05.230916 | orchestrator | Friday 30 May 2025 00:43:05 +0000 (0:00:00.210) 0:00:32.464 ************ 2025-05-30 00:43:05.425916 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:43:05.426824 | orchestrator | 2025-05-30 00:43:05.427974 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:43:05.430475 | orchestrator | Friday 30 May 2025 00:43:05 +0000 (0:00:00.200) 0:00:32.665 ************ 2025-05-30 00:43:05.645406 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:43:05.645866 | orchestrator | 2025-05-30 00:43:05.647290 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:43:05.647992 | orchestrator | Friday 30 May 2025 00:43:05 +0000 (0:00:00.219) 0:00:32.884 ************ 2025-05-30 00:43:05.828911 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:43:05.829363 | orchestrator | 2025-05-30 00:43:05.830144 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:43:05.830859 | orchestrator | Friday 30 May 2025 00:43:05 +0000 (0:00:00.183) 0:00:33.068 ************ 2025-05-30 00:43:06.056138 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:43:06.057429 | orchestrator | 2025-05-30 00:43:06.058404 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:43:06.064354 | orchestrator | Friday 30 May 2025 00:43:06 +0000 (0:00:00.226) 0:00:33.294 ************ 2025-05-30 00:43:06.656870 | orchestrator | skipping: [testbed-node-5] 2025-05-30 
00:43:06.657256 | orchestrator | 2025-05-30 00:43:06.658313 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:43:06.659825 | orchestrator | Friday 30 May 2025 00:43:06 +0000 (0:00:00.601) 0:00:33.896 ************ 2025-05-30 00:43:06.872834 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:43:06.873164 | orchestrator | 2025-05-30 00:43:06.874088 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:43:06.874932 | orchestrator | Friday 30 May 2025 00:43:06 +0000 (0:00:00.216) 0:00:34.112 ************ 2025-05-30 00:43:07.088136 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:43:07.089618 | orchestrator | 2025-05-30 00:43:07.092616 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:43:07.092672 | orchestrator | Friday 30 May 2025 00:43:07 +0000 (0:00:00.213) 0:00:34.326 ************ 2025-05-30 00:43:07.714960 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-30 00:43:07.716187 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-05-30 00:43:07.718681 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-30 00:43:07.718727 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-30 00:43:07.718739 | orchestrator | 2025-05-30 00:43:07.718749 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:43:07.719747 | orchestrator | Friday 30 May 2025 00:43:07 +0000 (0:00:00.626) 0:00:34.952 ************ 2025-05-30 00:43:07.914588 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:43:07.914714 | orchestrator | 2025-05-30 00:43:07.915384 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:43:07.916114 | orchestrator | Friday 30 May 2025 00:43:07 +0000 (0:00:00.200) 0:00:35.153 ************ 2025-05-30 00:43:08.100479 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:43:08.101336 | orchestrator | 2025-05-30 00:43:08.102334 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:43:08.102881 | orchestrator | Friday 30 May 2025 00:43:08 +0000 (0:00:00.186) 0:00:35.339 ************ 2025-05-30 00:43:08.300834 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:43:08.301224 | orchestrator | 2025-05-30 00:43:08.302921 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:43:08.302933 | orchestrator | Friday 30 May 2025 00:43:08 +0000 (0:00:00.199) 0:00:35.538 ************ 2025-05-30 00:43:08.487514 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:43:08.488343 | orchestrator | 2025-05-30 00:43:08.489057 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-30 00:43:08.489789 | orchestrator | Friday 30 May 2025 00:43:08 +0000 (0:00:00.187) 0:00:35.726 ************ 2025-05-30 00:43:08.663708 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-05-30 00:43:08.664788 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-05-30 00:43:08.666116 | orchestrator | 2025-05-30 00:43:08.667384 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-30 00:43:08.668322 | orchestrator | Friday 30 May 2025 00:43:08 +0000 (0:00:00.176) 0:00:35.903 ************ 2025-05-30 00:43:08.804980 | 
orchestrator | skipping: [testbed-node-5] 2025-05-30 00:43:08.805304 | orchestrator | 2025-05-30 00:43:08.806242 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-30 00:43:08.807053 | orchestrator | Friday 30 May 2025 00:43:08 +0000 (0:00:00.140) 0:00:36.043 ************ 2025-05-30 00:43:08.940589 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:43:08.941398 | orchestrator | 2025-05-30 00:43:08.941618 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-30 00:43:08.942317 | orchestrator | Friday 30 May 2025 00:43:08 +0000 (0:00:00.136) 0:00:36.180 ************ 2025-05-30 00:43:09.288257 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:43:09.288459 | orchestrator | 2025-05-30 00:43:09.289141 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-30 00:43:09.289573 | orchestrator | Friday 30 May 2025 00:43:09 +0000 (0:00:00.346) 0:00:36.526 ************ 2025-05-30 00:43:09.436971 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:43:09.437082 | orchestrator | 2025-05-30 00:43:09.437155 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-30 00:43:09.437476 | orchestrator | Friday 30 May 2025 00:43:09 +0000 (0:00:00.148) 0:00:36.675 ************ 2025-05-30 00:43:09.610823 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2ff0e7ee-f669-5460-a216-2d1fc13a4a65'}}) 2025-05-30 00:43:09.611481 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dfef1ad9-1307-56b8-9770-fa52c7fc01ce'}}) 2025-05-30 00:43:09.612529 | orchestrator | 2025-05-30 00:43:09.612960 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-30 00:43:09.613749 | orchestrator | Friday 30 May 2025 00:43:09 +0000 (0:00:00.174) 0:00:36.850 ************ 2025-05-30 00:43:09.765204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2ff0e7ee-f669-5460-a216-2d1fc13a4a65'}})  2025-05-30 00:43:09.765359 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dfef1ad9-1307-56b8-9770-fa52c7fc01ce'}})  2025-05-30 00:43:09.766057 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:43:09.766952 | orchestrator | 2025-05-30 00:43:09.769149 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-30 00:43:09.769173 | orchestrator | Friday 30 May 2025 00:43:09 +0000 (0:00:00.153) 0:00:37.003 ************ 2025-05-30 00:43:09.932272 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2ff0e7ee-f669-5460-a216-2d1fc13a4a65'}})  2025-05-30 00:43:09.932769 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dfef1ad9-1307-56b8-9770-fa52c7fc01ce'}})  2025-05-30 00:43:09.933010 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:43:09.934123 | orchestrator | 2025-05-30 00:43:09.934942 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-30 00:43:09.935860 | orchestrator | Friday 30 May 2025 00:43:09 +0000 (0:00:00.166) 0:00:37.170 ************ 2025-05-30 00:43:10.100544 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2ff0e7ee-f669-5460-a216-2d1fc13a4a65'}})  2025-05-30 00:43:10.100733 
| orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dfef1ad9-1307-56b8-9770-fa52c7fc01ce'}})  2025-05-30 00:43:10.100860 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:43:10.100880 | orchestrator | 2025-05-30 00:43:10.101300 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-30 00:43:10.101560 | orchestrator | Friday 30 May 2025 00:43:10 +0000 (0:00:00.168) 0:00:37.339 ************ 2025-05-30 00:43:10.255197 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:43:10.255287 | orchestrator | 2025-05-30 00:43:10.255320 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-30 00:43:10.257317 | orchestrator | Friday 30 May 2025 00:43:10 +0000 (0:00:00.154) 0:00:37.493 ************ 2025-05-30 00:43:10.420236 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:43:10.420418 | orchestrator | 2025-05-30 00:43:10.420558 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-30 00:43:10.421274 | orchestrator | Friday 30 May 2025 00:43:10 +0000 (0:00:00.165) 0:00:37.658 ************ 2025-05-30 00:43:10.597611 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:43:10.597765 | orchestrator | 2025-05-30 00:43:10.597780 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-30 00:43:10.597793 | orchestrator | Friday 30 May 2025 00:43:10 +0000 (0:00:00.174) 0:00:37.832 ************ 2025-05-30 00:43:10.734553 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:43:10.736481 | orchestrator | 2025-05-30 00:43:10.736510 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-05-30 00:43:10.738743 | orchestrator | Friday 30 May 2025 00:43:10 +0000 (0:00:00.140) 0:00:37.973 ************ 2025-05-30 00:43:10.868180 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:43:10.868294 | orchestrator | 2025-05-30 00:43:10.868391 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-30 00:43:10.871164 | orchestrator | Friday 30 May 2025 00:43:10 +0000 (0:00:00.133) 0:00:38.106 ************ 2025-05-30 00:43:10.997321 | orchestrator | ok: [testbed-node-5] => { 2025-05-30 00:43:10.997906 | orchestrator |  "ceph_osd_devices": { 2025-05-30 00:43:10.999061 | orchestrator |  "sdb": { 2025-05-30 00:43:10.999908 | orchestrator |  "osd_lvm_uuid": "2ff0e7ee-f669-5460-a216-2d1fc13a4a65" 2025-05-30 00:43:11.002421 | orchestrator |  }, 2025-05-30 00:43:11.002716 | orchestrator |  "sdc": { 2025-05-30 00:43:11.002745 | orchestrator |  "osd_lvm_uuid": "dfef1ad9-1307-56b8-9770-fa52c7fc01ce" 2025-05-30 00:43:11.002758 | orchestrator |  } 2025-05-30 00:43:11.003151 | orchestrator |  } 2025-05-30 00:43:11.003722 | orchestrator | } 2025-05-30 00:43:11.004141 | orchestrator | 2025-05-30 00:43:11.004674 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-05-30 00:43:11.005130 | orchestrator | Friday 30 May 2025 00:43:10 +0000 (0:00:00.130) 0:00:38.236 ************ 2025-05-30 00:43:11.329494 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:43:11.329981 | orchestrator | 2025-05-30 00:43:11.330451 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-05-30 00:43:11.331370 | orchestrator | Friday 30 May 2025 00:43:11 +0000 (0:00:00.330) 0:00:38.567 ************ 2025-05-30 
00:43:11.466570 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:43:11.467815 | orchestrator | 2025-05-30 00:43:11.468043 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-05-30 00:43:11.469325 | orchestrator | Friday 30 May 2025 00:43:11 +0000 (0:00:00.137) 0:00:38.705 ************ 2025-05-30 00:43:11.606765 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:43:11.607213 | orchestrator | 2025-05-30 00:43:11.608155 | orchestrator | TASK [Print configuration data] ************************************************ 2025-05-30 00:43:11.609162 | orchestrator | Friday 30 May 2025 00:43:11 +0000 (0:00:00.140) 0:00:38.845 ************ 2025-05-30 00:43:11.902479 | orchestrator | changed: [testbed-node-5] => { 2025-05-30 00:43:11.902716 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-30 00:43:11.904167 | orchestrator |  "ceph_osd_devices": { 2025-05-30 00:43:11.906115 | orchestrator |  "sdb": { 2025-05-30 00:43:11.907352 | orchestrator |  "osd_lvm_uuid": "2ff0e7ee-f669-5460-a216-2d1fc13a4a65" 2025-05-30 00:43:11.908137 | orchestrator |  }, 2025-05-30 00:43:11.908842 | orchestrator |  "sdc": { 2025-05-30 00:43:11.909867 | orchestrator |  "osd_lvm_uuid": "dfef1ad9-1307-56b8-9770-fa52c7fc01ce" 2025-05-30 00:43:11.911030 | orchestrator |  } 2025-05-30 00:43:11.911632 | orchestrator |  }, 2025-05-30 00:43:11.912563 | orchestrator |  "lvm_volumes": [ 2025-05-30 00:43:11.912881 | orchestrator |  { 2025-05-30 00:43:11.913880 | orchestrator |  "data": "osd-block-2ff0e7ee-f669-5460-a216-2d1fc13a4a65", 2025-05-30 00:43:11.915035 | orchestrator |  "data_vg": "ceph-2ff0e7ee-f669-5460-a216-2d1fc13a4a65" 2025-05-30 00:43:11.915107 | orchestrator |  }, 2025-05-30 00:43:11.915627 | orchestrator |  { 2025-05-30 00:43:11.916096 | orchestrator |  "data": "osd-block-dfef1ad9-1307-56b8-9770-fa52c7fc01ce", 2025-05-30 00:43:11.916577 | orchestrator |  "data_vg": "ceph-dfef1ad9-1307-56b8-9770-fa52c7fc01ce" 2025-05-30 00:43:11.917051 | orchestrator |  } 2025-05-30 00:43:11.917694 | orchestrator |  ] 2025-05-30 00:43:11.918102 | orchestrator |  } 2025-05-30 00:43:11.918591 | orchestrator | } 2025-05-30 00:43:11.919024 | orchestrator | 2025-05-30 00:43:11.919279 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-30 00:43:11.919713 | orchestrator | Friday 30 May 2025 00:43:11 +0000 (0:00:00.296) 0:00:39.142 ************ 2025-05-30 00:43:12.999104 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-30 00:43:12.999548 | orchestrator | 2025-05-30 00:43:13.000693 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:43:13.002175 | orchestrator | 2025-05-30 00:43:12 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-30 00:43:13.002260 | orchestrator | 2025-05-30 00:43:12 | INFO  | Please wait and do not abort execution. 
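The tasks logged above reduce the per-host ceph_osd_devices dict to the lvm_volumes list that the Ceph tooling consumes: each OSD UUID yields a block LV named osd-block-<uuid> inside a VG named ceph-<uuid>, exactly as shown in the "Print configuration data" output. A minimal Python sketch of that derivation for the block-only case seen in this run (the helper is illustrative, not the playbook's actual code):

```python
# Derive lvm_volumes from ceph_osd_devices (block-only case).
# Device names and UUIDs are copied from the log for testbed-node-5.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "2ff0e7ee-f669-5460-a216-2d1fc13a4a65"},
    "sdc": {"osd_lvm_uuid": "dfef1ad9-1307-56b8-9770-fa52c7fc01ce"},
}

def build_lvm_volumes(devices):
    """Map every OSD UUID to its block LV and VG names."""
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in devices.values()
    ]

print(build_lvm_volumes(ceph_osd_devices))
# -> two entries matching the lvm_volumes list printed above
```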
2025-05-30 00:43:13.002593 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-05-30 00:43:13.003514 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-05-30 00:43:13.005838 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-05-30 00:43:13.006749 | orchestrator | 2025-05-30 00:43:13.006954 | orchestrator | 2025-05-30 00:43:13.007996 | orchestrator | 2025-05-30 00:43:13.008257 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-30 00:43:13.008958 | orchestrator | Friday 30 May 2025 00:43:12 +0000 (0:00:01.094) 0:00:40.236 ************ 2025-05-30 00:43:13.009999 | orchestrator | =============================================================================== 2025-05-30 00:43:13.010081 | orchestrator | Write configuration file ------------------------------------------------ 4.24s 2025-05-30 00:43:13.011019 | orchestrator | Add known links to the list of available block devices ------------------ 1.30s 2025-05-30 00:43:13.011603 | orchestrator | Add known partitions to the list of available block devices ------------- 1.06s 2025-05-30 00:43:13.012477 | orchestrator | Print configuration data ------------------------------------------------ 0.99s 2025-05-30 00:43:13.013121 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.78s 2025-05-30 00:43:13.013934 | orchestrator | Add known partitions to the list of available block devices ------------- 0.77s 2025-05-30 00:43:13.014576 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s 2025-05-30 00:43:13.015247 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.73s 2025-05-30 00:43:13.015985 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.69s 2025-05-30 00:43:13.016889 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2025-05-30 00:43:13.018072 | orchestrator | Add known partitions to the list of available block devices ------------- 0.64s 2025-05-30 00:43:13.018340 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s 2025-05-30 00:43:13.019351 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s 2025-05-30 00:43:13.020055 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s 2025-05-30 00:43:13.020969 | orchestrator | Get initial list of available block devices ----------------------------- 0.61s 2025-05-30 00:43:13.021585 | orchestrator | Add known partitions to the list of available block devices ------------- 0.60s 2025-05-30 00:43:13.022135 | orchestrator | Print WAL devices ------------------------------------------------------- 0.59s 2025-05-30 00:43:13.023167 | orchestrator | Add known links to the list of available block devices ------------------ 0.59s 2025-05-30 00:43:13.024351 | orchestrator | Print ceph_osd_devices -------------------------------------------------- 0.58s 2025-05-30 00:43:13.024486 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.54s 2025-05-30 00:43:25.014982 | orchestrator | 2025-05-30 00:43:25 | INFO  | Task ffb7cf6b-6ce8-4e8e-b34a-d8470072a17e is running in background. Output coming soon. 
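The background task announced above appears to be the inventory reconciler whose output follows: generated fragments such as 050-ceph-cluster-fsid.yml are rewritten, and groups that a higher-priority file (99-overwrite, 20-roles) redefines are removed from the lower-priority fragments. A minimal sketch of that overwrite rule, assuming INI-style fragments under /inventory.pre/ and a hypothetical remove_group() helper; the real reconciler is part of the OSISM tooling and may work differently:

```python
# Sketch: drop a group section that a higher-priority inventory file overrides.
import re
from pathlib import Path

def remove_group(fragment: Path, group: str) -> None:
    """Delete the [group] or [group:children] section from one fragment."""
    text = fragment.read_text()
    pattern = rf"(?ms)^\[{re.escape(group)}(:children)?\].*?(?=^\[|\Z)"
    fragment.write_text(re.sub(pattern, "", text))

# Mirrors e.g. "Removing group ceph-rgw from 50-ceph" seen in the log:
# remove_group(Path("/inventory.pre/50-ceph"), "ceph-rgw")
```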
2025-05-30 00:43:48.862173 | orchestrator | 2025-05-30 00:43:40 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-05-30 00:43:48.862297 | orchestrator | 2025-05-30 00:43:40 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-05-30 00:43:48.862338 | orchestrator | 2025-05-30 00:43:40 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-05-30 00:43:48.862362 | orchestrator | 2025-05-30 00:43:41 | INFO  | Handling group overwrites in 99-overwrite 2025-05-30 00:43:48.862382 | orchestrator | 2025-05-30 00:43:41 | INFO  | Removing group frr:children from 60-generic 2025-05-30 00:43:48.862401 | orchestrator | 2025-05-30 00:43:41 | INFO  | Removing group storage:children from 50-kolla 2025-05-30 00:43:48.862421 | orchestrator | 2025-05-30 00:43:41 | INFO  | Removing group netbird:children from 50-infrastruture 2025-05-30 00:43:48.862440 | orchestrator | 2025-05-30 00:43:41 | INFO  | Removing group ceph-mds from 50-ceph 2025-05-30 00:43:48.862457 | orchestrator | 2025-05-30 00:43:41 | INFO  | Removing group ceph-rgw from 50-ceph 2025-05-30 00:43:48.862468 | orchestrator | 2025-05-30 00:43:41 | INFO  | Handling group overwrites in 20-roles 2025-05-30 00:43:48.862479 | orchestrator | 2025-05-30 00:43:41 | INFO  | Removing group k3s_node from 50-infrastruture 2025-05-30 00:43:48.862489 | orchestrator | 2025-05-30 00:43:41 | INFO  | File 20-netbox not found in /inventory.pre/ 2025-05-30 00:43:48.862501 | orchestrator | 2025-05-30 00:43:48 | INFO  | Writing /inventory/clustershell/ansible.yaml with clustershell groups 2025-05-30 00:43:50.432080 | orchestrator | 2025-05-30 00:43:50 | INFO  | Task b614a5c6-86c1-4539-a987-f9daf5f1eca1 (ceph-create-lvm-devices) was prepared for execution. 2025-05-30 00:43:50.432178 | orchestrator | 2025-05-30 00:43:50 | INFO  | It takes a moment until task b614a5c6-86c1-4539-a987-f9daf5f1eca1 (ceph-create-lvm-devices) has been started and output is visible here. 
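The ceph-create-lvm-devices play that starts below consumes the lvm_volumes list generated earlier: its "Create block VGs" and "Create block LVs" tasks create one volume group per OSD disk and one block logical volume inside it. A rough shell-level equivalent of those two tasks, sketched with subprocess; device paths and UUIDs are taken from the log for testbed-node-3, the real play uses Ansible's LVM modules rather than direct CLI calls, and the assumption that the block LV takes the whole VG only holds for this block-only layout:

```python
# Shell-level equivalent of "Create block VGs" / "Create block LVs".
import subprocess

osds = {
    "/dev/sdb": "6d0cb66e-f8af-5d02-a2d6-05303feeced3",
    "/dev/sdc": "f43ff32d-4fc4-5ece-8353-26072ce1c913",
}

for device, uuid in osds.items():
    vg = f"ceph-{uuid}"
    lv = f"osd-block-{uuid}"
    # One VG per OSD disk, with the whole disk as its only PV ...
    subprocess.run(["vgcreate", vg, device], check=True)
    # ... and one block LV spanning all free extents of that VG.
    subprocess.run(["lvcreate", "-n", lv, "-l", "100%FREE", vg], check=True)
```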
2025-05-30 00:43:53.291302 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-30 00:43:53.765754 | orchestrator | 2025-05-30 00:43:53.767159 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-30 00:43:53.767200 | orchestrator | 2025-05-30 00:43:53.767213 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-30 00:43:53.767224 | orchestrator | Friday 30 May 2025 00:43:53 +0000 (0:00:00.411) 0:00:00.411 ************ 2025-05-30 00:43:53.975361 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-30 00:43:53.975474 | orchestrator | 2025-05-30 00:43:53.975508 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-30 00:43:53.975802 | orchestrator | Friday 30 May 2025 00:43:53 +0000 (0:00:00.207) 0:00:00.619 ************ 2025-05-30 00:43:54.190009 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:43:54.190241 | orchestrator | 2025-05-30 00:43:54.191608 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:43:54.193301 | orchestrator | Friday 30 May 2025 00:43:54 +0000 (0:00:00.216) 0:00:00.835 ************ 2025-05-30 00:43:54.899588 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-05-30 00:43:54.900456 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-05-30 00:43:54.901545 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-05-30 00:43:54.902741 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-05-30 00:43:54.903676 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-05-30 00:43:54.904346 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-05-30 00:43:54.905254 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-05-30 00:43:54.905911 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-05-30 00:43:54.906376 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-05-30 00:43:54.907149 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-05-30 00:43:54.907465 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-05-30 00:43:54.908261 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-05-30 00:43:54.908686 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-05-30 00:43:54.909342 | orchestrator | 2025-05-30 00:43:54.909813 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:43:54.910166 | orchestrator | Friday 30 May 2025 00:43:54 +0000 (0:00:00.710) 0:00:01.546 ************ 2025-05-30 00:43:55.094548 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:43:55.094645 | orchestrator | 2025-05-30 00:43:55.094701 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:43:55.097985 | orchestrator | Friday 30 May 2025 00:43:55 +0000 
(0:00:00.192) 0:00:01.739 ************ 2025-05-30 00:43:55.279959 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:43:55.280071 | orchestrator | 2025-05-30 00:43:55.280098 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:43:55.281956 | orchestrator | Friday 30 May 2025 00:43:55 +0000 (0:00:00.184) 0:00:01.923 ************ 2025-05-30 00:43:55.480916 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:43:55.481019 | orchestrator | 2025-05-30 00:43:55.481126 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:43:55.481485 | orchestrator | Friday 30 May 2025 00:43:55 +0000 (0:00:00.204) 0:00:02.128 ************ 2025-05-30 00:43:55.685709 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:43:55.685912 | orchestrator | 2025-05-30 00:43:55.686364 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:43:55.687265 | orchestrator | Friday 30 May 2025 00:43:55 +0000 (0:00:00.204) 0:00:02.332 ************ 2025-05-30 00:43:55.896352 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:43:55.899148 | orchestrator | 2025-05-30 00:43:55.899241 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:43:55.899874 | orchestrator | Friday 30 May 2025 00:43:55 +0000 (0:00:00.211) 0:00:02.543 ************ 2025-05-30 00:43:56.104338 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:43:56.104969 | orchestrator | 2025-05-30 00:43:56.105823 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:43:56.107130 | orchestrator | Friday 30 May 2025 00:43:56 +0000 (0:00:00.206) 0:00:02.750 ************ 2025-05-30 00:43:56.293545 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:43:56.293810 | orchestrator | 2025-05-30 00:43:56.294818 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:43:56.295582 | orchestrator | Friday 30 May 2025 00:43:56 +0000 (0:00:00.189) 0:00:02.939 ************ 2025-05-30 00:43:56.488768 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:43:56.489077 | orchestrator | 2025-05-30 00:43:56.489728 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:43:56.490488 | orchestrator | Friday 30 May 2025 00:43:56 +0000 (0:00:00.195) 0:00:03.135 ************ 2025-05-30 00:43:57.102243 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9a6319b3-0c44-4d2f-bfc1-43899b1e392d) 2025-05-30 00:43:57.103256 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9a6319b3-0c44-4d2f-bfc1-43899b1e392d) 2025-05-30 00:43:57.103706 | orchestrator | 2025-05-30 00:43:57.108645 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:43:57.108740 | orchestrator | Friday 30 May 2025 00:43:57 +0000 (0:00:00.612) 0:00:03.748 ************ 2025-05-30 00:43:57.900162 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5232ed07-4d85-4988-9bc7-7d761a8f0a42) 2025-05-30 00:43:57.900374 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5232ed07-4d85-4988-9bc7-7d761a8f0a42) 2025-05-30 00:43:57.901492 | orchestrator | 2025-05-30 00:43:57.902360 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 
00:43:57.903007 | orchestrator | Friday 30 May 2025 00:43:57 +0000 (0:00:00.799) 0:00:04.547 ************ 2025-05-30 00:43:58.325567 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d57cbd6a-67f1-4040-83cf-671f4c3c6a1f) 2025-05-30 00:43:58.325997 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d57cbd6a-67f1-4040-83cf-671f4c3c6a1f) 2025-05-30 00:43:58.330169 | orchestrator | 2025-05-30 00:43:58.333473 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:43:58.335929 | orchestrator | Friday 30 May 2025 00:43:58 +0000 (0:00:00.422) 0:00:04.970 ************ 2025-05-30 00:43:58.776623 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_76f37bde-13ed-44ba-8084-a2417c9798d9) 2025-05-30 00:43:58.778143 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_76f37bde-13ed-44ba-8084-a2417c9798d9) 2025-05-30 00:43:58.778189 | orchestrator | 2025-05-30 00:43:58.778199 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:43:58.778209 | orchestrator | Friday 30 May 2025 00:43:58 +0000 (0:00:00.454) 0:00:05.424 ************ 2025-05-30 00:43:59.112443 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-30 00:43:59.112867 | orchestrator | 2025-05-30 00:43:59.113416 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:43:59.113452 | orchestrator | Friday 30 May 2025 00:43:59 +0000 (0:00:00.332) 0:00:05.757 ************ 2025-05-30 00:43:59.574904 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-05-30 00:43:59.575098 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-05-30 00:43:59.576269 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-05-30 00:43:59.576636 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-05-30 00:43:59.577397 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-05-30 00:43:59.578240 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-05-30 00:43:59.578916 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-05-30 00:43:59.581754 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-05-30 00:43:59.581780 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-05-30 00:43:59.581791 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-05-30 00:43:59.581803 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-05-30 00:43:59.581813 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-05-30 00:43:59.581824 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-05-30 00:43:59.581867 | orchestrator | 2025-05-30 00:43:59.581944 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:43:59.582244 | orchestrator | Friday 30 May 2025 00:43:59 +0000 
(0:00:00.463) 0:00:06.220 ************ 2025-05-30 00:43:59.786441 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:43:59.787887 | orchestrator | 2025-05-30 00:43:59.787934 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:43:59.787956 | orchestrator | Friday 30 May 2025 00:43:59 +0000 (0:00:00.213) 0:00:06.434 ************ 2025-05-30 00:43:59.973775 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:43:59.974133 | orchestrator | 2025-05-30 00:43:59.975632 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:43:59.978107 | orchestrator | Friday 30 May 2025 00:43:59 +0000 (0:00:00.186) 0:00:06.620 ************ 2025-05-30 00:44:00.177148 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:00.177913 | orchestrator | 2025-05-30 00:44:00.178959 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:00.179802 | orchestrator | Friday 30 May 2025 00:44:00 +0000 (0:00:00.203) 0:00:06.824 ************ 2025-05-30 00:44:00.398644 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:00.398780 | orchestrator | 2025-05-30 00:44:00.398795 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:00.398808 | orchestrator | Friday 30 May 2025 00:44:00 +0000 (0:00:00.218) 0:00:07.042 ************ 2025-05-30 00:44:00.923646 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:00.924085 | orchestrator | 2025-05-30 00:44:00.925636 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:00.926499 | orchestrator | Friday 30 May 2025 00:44:00 +0000 (0:00:00.527) 0:00:07.570 ************ 2025-05-30 00:44:01.133173 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:01.133336 | orchestrator | 2025-05-30 00:44:01.133790 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:01.134245 | orchestrator | Friday 30 May 2025 00:44:01 +0000 (0:00:00.208) 0:00:07.779 ************ 2025-05-30 00:44:01.346786 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:01.346903 | orchestrator | 2025-05-30 00:44:01.347435 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:01.347893 | orchestrator | Friday 30 May 2025 00:44:01 +0000 (0:00:00.213) 0:00:07.992 ************ 2025-05-30 00:44:01.543162 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:01.543356 | orchestrator | 2025-05-30 00:44:01.543374 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:01.543459 | orchestrator | Friday 30 May 2025 00:44:01 +0000 (0:00:00.195) 0:00:08.188 ************ 2025-05-30 00:44:02.239085 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-05-30 00:44:02.239850 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-05-30 00:44:02.241057 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-05-30 00:44:02.241688 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-05-30 00:44:02.241932 | orchestrator | 2025-05-30 00:44:02.243505 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:02.243533 | orchestrator | Friday 30 May 2025 00:44:02 +0000 (0:00:00.696) 0:00:08.884 ************ 2025-05-30 00:44:02.444306 | orchestrator | 
skipping: [testbed-node-3] 2025-05-30 00:44:02.444402 | orchestrator | 2025-05-30 00:44:02.444815 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:02.445535 | orchestrator | Friday 30 May 2025 00:44:02 +0000 (0:00:00.205) 0:00:09.090 ************ 2025-05-30 00:44:02.651333 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:02.651433 | orchestrator | 2025-05-30 00:44:02.651506 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:02.652524 | orchestrator | Friday 30 May 2025 00:44:02 +0000 (0:00:00.202) 0:00:09.293 ************ 2025-05-30 00:44:02.836879 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:02.837617 | orchestrator | 2025-05-30 00:44:02.838344 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:02.839166 | orchestrator | Friday 30 May 2025 00:44:02 +0000 (0:00:00.190) 0:00:09.483 ************ 2025-05-30 00:44:03.021063 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:03.021513 | orchestrator | 2025-05-30 00:44:03.022295 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-30 00:44:03.023094 | orchestrator | Friday 30 May 2025 00:44:03 +0000 (0:00:00.183) 0:00:09.667 ************ 2025-05-30 00:44:03.152835 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:03.153305 | orchestrator | 2025-05-30 00:44:03.154072 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-30 00:44:03.154188 | orchestrator | Friday 30 May 2025 00:44:03 +0000 (0:00:00.131) 0:00:09.799 ************ 2025-05-30 00:44:03.371743 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6d0cb66e-f8af-5d02-a2d6-05303feeced3'}}) 2025-05-30 00:44:03.371850 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f43ff32d-4fc4-5ece-8353-26072ce1c913'}}) 2025-05-30 00:44:03.371990 | orchestrator | 2025-05-30 00:44:03.372287 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-30 00:44:03.374796 | orchestrator | Friday 30 May 2025 00:44:03 +0000 (0:00:00.217) 0:00:10.017 ************ 2025-05-30 00:44:05.603281 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6d0cb66e-f8af-5d02-a2d6-05303feeced3', 'data_vg': 'ceph-6d0cb66e-f8af-5d02-a2d6-05303feeced3'}) 2025-05-30 00:44:05.603372 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f43ff32d-4fc4-5ece-8353-26072ce1c913', 'data_vg': 'ceph-f43ff32d-4fc4-5ece-8353-26072ce1c913'}) 2025-05-30 00:44:05.603999 | orchestrator | 2025-05-30 00:44:05.605527 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-30 00:44:05.606565 | orchestrator | Friday 30 May 2025 00:44:05 +0000 (0:00:02.230) 0:00:12.247 ************ 2025-05-30 00:44:05.791595 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d0cb66e-f8af-5d02-a2d6-05303feeced3', 'data_vg': 'ceph-6d0cb66e-f8af-5d02-a2d6-05303feeced3'})  2025-05-30 00:44:05.791734 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f43ff32d-4fc4-5ece-8353-26072ce1c913', 'data_vg': 'ceph-f43ff32d-4fc4-5ece-8353-26072ce1c913'})  2025-05-30 00:44:05.791792 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:05.791874 | orchestrator | 2025-05-30 00:44:05.792776 | 
orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-30 00:44:05.793322 | orchestrator | Friday 30 May 2025 00:44:05 +0000 (0:00:00.190) 0:00:12.438 ************ 2025-05-30 00:44:07.254622 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6d0cb66e-f8af-5d02-a2d6-05303feeced3', 'data_vg': 'ceph-6d0cb66e-f8af-5d02-a2d6-05303feeced3'}) 2025-05-30 00:44:07.254774 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f43ff32d-4fc4-5ece-8353-26072ce1c913', 'data_vg': 'ceph-f43ff32d-4fc4-5ece-8353-26072ce1c913'}) 2025-05-30 00:44:07.254799 | orchestrator | 2025-05-30 00:44:07.254824 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-30 00:44:07.255192 | orchestrator | Friday 30 May 2025 00:44:07 +0000 (0:00:01.458) 0:00:13.896 ************ 2025-05-30 00:44:07.404461 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d0cb66e-f8af-5d02-a2d6-05303feeced3', 'data_vg': 'ceph-6d0cb66e-f8af-5d02-a2d6-05303feeced3'})  2025-05-30 00:44:07.406152 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f43ff32d-4fc4-5ece-8353-26072ce1c913', 'data_vg': 'ceph-f43ff32d-4fc4-5ece-8353-26072ce1c913'})  2025-05-30 00:44:07.406186 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:07.407334 | orchestrator | 2025-05-30 00:44:07.408312 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-30 00:44:07.410356 | orchestrator | Friday 30 May 2025 00:44:07 +0000 (0:00:00.154) 0:00:14.051 ************ 2025-05-30 00:44:07.543691 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:07.545374 | orchestrator | 2025-05-30 00:44:07.546202 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-30 00:44:07.547122 | orchestrator | Friday 30 May 2025 00:44:07 +0000 (0:00:00.138) 0:00:14.189 ************ 2025-05-30 00:44:07.719822 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d0cb66e-f8af-5d02-a2d6-05303feeced3', 'data_vg': 'ceph-6d0cb66e-f8af-5d02-a2d6-05303feeced3'})  2025-05-30 00:44:07.719938 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f43ff32d-4fc4-5ece-8353-26072ce1c913', 'data_vg': 'ceph-f43ff32d-4fc4-5ece-8353-26072ce1c913'})  2025-05-30 00:44:07.721950 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:07.721985 | orchestrator | 2025-05-30 00:44:07.722111 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-30 00:44:07.723355 | orchestrator | Friday 30 May 2025 00:44:07 +0000 (0:00:00.174) 0:00:14.364 ************ 2025-05-30 00:44:07.855486 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:07.855733 | orchestrator | 2025-05-30 00:44:07.856330 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-30 00:44:07.857218 | orchestrator | Friday 30 May 2025 00:44:07 +0000 (0:00:00.136) 0:00:14.501 ************ 2025-05-30 00:44:08.025328 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d0cb66e-f8af-5d02-a2d6-05303feeced3', 'data_vg': 'ceph-6d0cb66e-f8af-5d02-a2d6-05303feeced3'})  2025-05-30 00:44:08.025895 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f43ff32d-4fc4-5ece-8353-26072ce1c913', 'data_vg': 'ceph-f43ff32d-4fc4-5ece-8353-26072ce1c913'})  2025-05-30 00:44:08.026263 | orchestrator | skipping: 
[testbed-node-3] 2025-05-30 00:44:08.027347 | orchestrator | 2025-05-30 00:44:08.028097 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-30 00:44:08.028571 | orchestrator | Friday 30 May 2025 00:44:08 +0000 (0:00:00.168) 0:00:14.669 ************ 2025-05-30 00:44:08.157784 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:08.158204 | orchestrator | 2025-05-30 00:44:08.158894 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-30 00:44:08.160739 | orchestrator | Friday 30 May 2025 00:44:08 +0000 (0:00:00.135) 0:00:14.804 ************ 2025-05-30 00:44:08.431987 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d0cb66e-f8af-5d02-a2d6-05303feeced3', 'data_vg': 'ceph-6d0cb66e-f8af-5d02-a2d6-05303feeced3'})  2025-05-30 00:44:08.432534 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f43ff32d-4fc4-5ece-8353-26072ce1c913', 'data_vg': 'ceph-f43ff32d-4fc4-5ece-8353-26072ce1c913'})  2025-05-30 00:44:08.433785 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:08.434615 | orchestrator | 2025-05-30 00:44:08.435103 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-30 00:44:08.436000 | orchestrator | Friday 30 May 2025 00:44:08 +0000 (0:00:00.273) 0:00:15.078 ************ 2025-05-30 00:44:08.571371 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:44:08.573256 | orchestrator | 2025-05-30 00:44:08.573292 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-30 00:44:08.573347 | orchestrator | Friday 30 May 2025 00:44:08 +0000 (0:00:00.138) 0:00:15.216 ************ 2025-05-30 00:44:08.726753 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d0cb66e-f8af-5d02-a2d6-05303feeced3', 'data_vg': 'ceph-6d0cb66e-f8af-5d02-a2d6-05303feeced3'})  2025-05-30 00:44:08.726918 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f43ff32d-4fc4-5ece-8353-26072ce1c913', 'data_vg': 'ceph-f43ff32d-4fc4-5ece-8353-26072ce1c913'})  2025-05-30 00:44:08.726936 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:08.727467 | orchestrator | 2025-05-30 00:44:08.730111 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-30 00:44:08.730160 | orchestrator | Friday 30 May 2025 00:44:08 +0000 (0:00:00.155) 0:00:15.372 ************ 2025-05-30 00:44:08.881200 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d0cb66e-f8af-5d02-a2d6-05303feeced3', 'data_vg': 'ceph-6d0cb66e-f8af-5d02-a2d6-05303feeced3'})  2025-05-30 00:44:08.881425 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f43ff32d-4fc4-5ece-8353-26072ce1c913', 'data_vg': 'ceph-f43ff32d-4fc4-5ece-8353-26072ce1c913'})  2025-05-30 00:44:08.883405 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:08.884095 | orchestrator | 2025-05-30 00:44:08.884647 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-30 00:44:08.885781 | orchestrator | Friday 30 May 2025 00:44:08 +0000 (0:00:00.154) 0:00:15.527 ************ 2025-05-30 00:44:09.052176 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d0cb66e-f8af-5d02-a2d6-05303feeced3', 'data_vg': 'ceph-6d0cb66e-f8af-5d02-a2d6-05303feeced3'})  2025-05-30 00:44:09.055250 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-f43ff32d-4fc4-5ece-8353-26072ce1c913', 'data_vg': 'ceph-f43ff32d-4fc4-5ece-8353-26072ce1c913'})  2025-05-30 00:44:09.055297 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:09.055312 | orchestrator | 2025-05-30 00:44:09.055324 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-30 00:44:09.055337 | orchestrator | Friday 30 May 2025 00:44:09 +0000 (0:00:00.165) 0:00:15.692 ************ 2025-05-30 00:44:09.187028 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:09.191170 | orchestrator | 2025-05-30 00:44:09.191261 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-30 00:44:09.191623 | orchestrator | Friday 30 May 2025 00:44:09 +0000 (0:00:00.140) 0:00:15.833 ************ 2025-05-30 00:44:09.336089 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:09.340697 | orchestrator | 2025-05-30 00:44:09.340825 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-30 00:44:09.341296 | orchestrator | Friday 30 May 2025 00:44:09 +0000 (0:00:00.143) 0:00:15.976 ************ 2025-05-30 00:44:09.471827 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:09.472095 | orchestrator | 2025-05-30 00:44:09.472718 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-30 00:44:09.473222 | orchestrator | Friday 30 May 2025 00:44:09 +0000 (0:00:00.142) 0:00:16.119 ************ 2025-05-30 00:44:09.626254 | orchestrator | ok: [testbed-node-3] => { 2025-05-30 00:44:09.626503 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-30 00:44:09.626526 | orchestrator | } 2025-05-30 00:44:09.626539 | orchestrator | 2025-05-30 00:44:09.626637 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-30 00:44:09.627267 | orchestrator | Friday 30 May 2025 00:44:09 +0000 (0:00:00.154) 0:00:16.274 ************ 2025-05-30 00:44:09.768967 | orchestrator | ok: [testbed-node-3] => { 2025-05-30 00:44:09.770147 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-30 00:44:09.770393 | orchestrator | } 2025-05-30 00:44:09.770562 | orchestrator | 2025-05-30 00:44:09.771741 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-30 00:44:09.772767 | orchestrator | Friday 30 May 2025 00:44:09 +0000 (0:00:00.140) 0:00:16.414 ************ 2025-05-30 00:44:09.919449 | orchestrator | ok: [testbed-node-3] => { 2025-05-30 00:44:09.920185 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-30 00:44:09.921314 | orchestrator | } 2025-05-30 00:44:09.922842 | orchestrator | 2025-05-30 00:44:09.923289 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-30 00:44:09.924610 | orchestrator | Friday 30 May 2025 00:44:09 +0000 (0:00:00.151) 0:00:16.565 ************ 2025-05-30 00:44:10.983415 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:44:10.983500 | orchestrator | 2025-05-30 00:44:10.984299 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-30 00:44:10.984932 | orchestrator | Friday 30 May 2025 00:44:10 +0000 (0:00:01.063) 0:00:17.628 ************ 2025-05-30 00:44:11.480365 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:44:11.480728 | orchestrator | 2025-05-30 00:44:11.481472 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] 
**************** 2025-05-30 00:44:11.482120 | orchestrator | Friday 30 May 2025 00:44:11 +0000 (0:00:00.497) 0:00:18.126 ************ 2025-05-30 00:44:11.966471 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:44:11.968163 | orchestrator | 2025-05-30 00:44:11.968908 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-30 00:44:11.969559 | orchestrator | Friday 30 May 2025 00:44:11 +0000 (0:00:00.485) 0:00:18.612 ************ 2025-05-30 00:44:12.104164 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:44:12.106400 | orchestrator | 2025-05-30 00:44:12.106460 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-30 00:44:12.106481 | orchestrator | Friday 30 May 2025 00:44:12 +0000 (0:00:00.137) 0:00:18.749 ************ 2025-05-30 00:44:12.205563 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:12.206444 | orchestrator | 2025-05-30 00:44:12.206977 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-30 00:44:12.208267 | orchestrator | Friday 30 May 2025 00:44:12 +0000 (0:00:00.102) 0:00:18.852 ************ 2025-05-30 00:44:12.308976 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:12.309126 | orchestrator | 2025-05-30 00:44:12.309944 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-30 00:44:12.309967 | orchestrator | Friday 30 May 2025 00:44:12 +0000 (0:00:00.096) 0:00:18.949 ************ 2025-05-30 00:44:12.467392 | orchestrator | ok: [testbed-node-3] => { 2025-05-30 00:44:12.467556 | orchestrator |  "vgs_report": { 2025-05-30 00:44:12.468943 | orchestrator |  "vg": [] 2025-05-30 00:44:12.471869 | orchestrator |  } 2025-05-30 00:44:12.471894 | orchestrator | } 2025-05-30 00:44:12.471906 | orchestrator | 2025-05-30 00:44:12.471918 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-30 00:44:12.472574 | orchestrator | Friday 30 May 2025 00:44:12 +0000 (0:00:00.164) 0:00:19.113 ************ 2025-05-30 00:44:12.609879 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:12.613273 | orchestrator | 2025-05-30 00:44:12.613311 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-30 00:44:12.613640 | orchestrator | Friday 30 May 2025 00:44:12 +0000 (0:00:00.141) 0:00:19.255 ************ 2025-05-30 00:44:12.749949 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:12.750467 | orchestrator | 2025-05-30 00:44:12.751345 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-30 00:44:12.752365 | orchestrator | Friday 30 May 2025 00:44:12 +0000 (0:00:00.138) 0:00:19.394 ************ 2025-05-30 00:44:12.875148 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:12.875914 | orchestrator | 2025-05-30 00:44:12.876848 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-30 00:44:12.877543 | orchestrator | Friday 30 May 2025 00:44:12 +0000 (0:00:00.128) 0:00:19.522 ************ 2025-05-30 00:44:12.994319 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:12.994820 | orchestrator | 2025-05-30 00:44:12.995758 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-30 00:44:12.996956 | orchestrator | Friday 30 May 2025 00:44:12 +0000 (0:00:00.118) 0:00:19.641 ************ 2025-05-30 
00:44:13.300176 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:13.301131 | orchestrator | 2025-05-30 00:44:13.302117 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-30 00:44:13.302798 | orchestrator | Friday 30 May 2025 00:44:13 +0000 (0:00:00.305) 0:00:19.946 ************ 2025-05-30 00:44:13.435455 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:13.435706 | orchestrator | 2025-05-30 00:44:13.436879 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-30 00:44:13.437316 | orchestrator | Friday 30 May 2025 00:44:13 +0000 (0:00:00.136) 0:00:20.082 ************ 2025-05-30 00:44:13.582966 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:13.583110 | orchestrator | 2025-05-30 00:44:13.584050 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-30 00:44:13.584072 | orchestrator | Friday 30 May 2025 00:44:13 +0000 (0:00:00.146) 0:00:20.229 ************ 2025-05-30 00:44:13.734957 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:13.735809 | orchestrator | 2025-05-30 00:44:13.736476 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-30 00:44:13.738975 | orchestrator | Friday 30 May 2025 00:44:13 +0000 (0:00:00.151) 0:00:20.381 ************ 2025-05-30 00:44:13.869203 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:13.869772 | orchestrator | 2025-05-30 00:44:13.870599 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-30 00:44:13.872140 | orchestrator | Friday 30 May 2025 00:44:13 +0000 (0:00:00.134) 0:00:20.515 ************ 2025-05-30 00:44:14.024732 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:14.025700 | orchestrator | 2025-05-30 00:44:14.027490 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-30 00:44:14.028213 | orchestrator | Friday 30 May 2025 00:44:14 +0000 (0:00:00.153) 0:00:20.669 ************ 2025-05-30 00:44:14.167902 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:14.168120 | orchestrator | 2025-05-30 00:44:14.169565 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-30 00:44:14.170551 | orchestrator | Friday 30 May 2025 00:44:14 +0000 (0:00:00.144) 0:00:20.814 ************ 2025-05-30 00:44:14.316374 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:14.316596 | orchestrator | 2025-05-30 00:44:14.317716 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-30 00:44:14.318275 | orchestrator | Friday 30 May 2025 00:44:14 +0000 (0:00:00.148) 0:00:20.963 ************ 2025-05-30 00:44:14.472463 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:14.473016 | orchestrator | 2025-05-30 00:44:14.473385 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-30 00:44:14.474597 | orchestrator | Friday 30 May 2025 00:44:14 +0000 (0:00:00.156) 0:00:21.119 ************ 2025-05-30 00:44:14.608584 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:14.608739 | orchestrator | 2025-05-30 00:44:14.608756 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-30 00:44:14.608860 | orchestrator | Friday 30 May 2025 00:44:14 +0000 (0:00:00.136) 0:00:21.255 
************ 2025-05-30 00:44:14.772308 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d0cb66e-f8af-5d02-a2d6-05303feeced3', 'data_vg': 'ceph-6d0cb66e-f8af-5d02-a2d6-05303feeced3'})  2025-05-30 00:44:14.772879 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f43ff32d-4fc4-5ece-8353-26072ce1c913', 'data_vg': 'ceph-f43ff32d-4fc4-5ece-8353-26072ce1c913'})  2025-05-30 00:44:14.773619 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:14.774249 | orchestrator | 2025-05-30 00:44:14.775031 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-30 00:44:14.775303 | orchestrator | Friday 30 May 2025 00:44:14 +0000 (0:00:00.162) 0:00:21.418 ************ 2025-05-30 00:44:14.931915 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d0cb66e-f8af-5d02-a2d6-05303feeced3', 'data_vg': 'ceph-6d0cb66e-f8af-5d02-a2d6-05303feeced3'})  2025-05-30 00:44:14.932117 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f43ff32d-4fc4-5ece-8353-26072ce1c913', 'data_vg': 'ceph-f43ff32d-4fc4-5ece-8353-26072ce1c913'})  2025-05-30 00:44:14.933029 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:14.933869 | orchestrator | 2025-05-30 00:44:14.934454 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-30 00:44:14.935339 | orchestrator | Friday 30 May 2025 00:44:14 +0000 (0:00:00.160) 0:00:21.578 ************ 2025-05-30 00:44:15.258731 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d0cb66e-f8af-5d02-a2d6-05303feeced3', 'data_vg': 'ceph-6d0cb66e-f8af-5d02-a2d6-05303feeced3'})  2025-05-30 00:44:15.258836 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f43ff32d-4fc4-5ece-8353-26072ce1c913', 'data_vg': 'ceph-f43ff32d-4fc4-5ece-8353-26072ce1c913'})  2025-05-30 00:44:15.258852 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:15.259391 | orchestrator | 2025-05-30 00:44:15.261882 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-30 00:44:15.262492 | orchestrator | Friday 30 May 2025 00:44:15 +0000 (0:00:00.324) 0:00:21.903 ************ 2025-05-30 00:44:15.413589 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d0cb66e-f8af-5d02-a2d6-05303feeced3', 'data_vg': 'ceph-6d0cb66e-f8af-5d02-a2d6-05303feeced3'})  2025-05-30 00:44:15.413763 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f43ff32d-4fc4-5ece-8353-26072ce1c913', 'data_vg': 'ceph-f43ff32d-4fc4-5ece-8353-26072ce1c913'})  2025-05-30 00:44:15.413796 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:15.414845 | orchestrator | 2025-05-30 00:44:15.414880 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-30 00:44:15.414895 | orchestrator | Friday 30 May 2025 00:44:15 +0000 (0:00:00.153) 0:00:22.056 ************ 2025-05-30 00:44:15.575927 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d0cb66e-f8af-5d02-a2d6-05303feeced3', 'data_vg': 'ceph-6d0cb66e-f8af-5d02-a2d6-05303feeced3'})  2025-05-30 00:44:15.576364 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f43ff32d-4fc4-5ece-8353-26072ce1c913', 'data_vg': 'ceph-f43ff32d-4fc4-5ece-8353-26072ce1c913'})  2025-05-30 00:44:15.577588 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:15.578348 | orchestrator | 2025-05-30 00:44:15.579266 | 
orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-30 00:44:15.579754 | orchestrator | Friday 30 May 2025 00:44:15 +0000 (0:00:00.165) 0:00:22.222 ************ 2025-05-30 00:44:15.741729 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d0cb66e-f8af-5d02-a2d6-05303feeced3', 'data_vg': 'ceph-6d0cb66e-f8af-5d02-a2d6-05303feeced3'})  2025-05-30 00:44:15.742271 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f43ff32d-4fc4-5ece-8353-26072ce1c913', 'data_vg': 'ceph-f43ff32d-4fc4-5ece-8353-26072ce1c913'})  2025-05-30 00:44:15.743095 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:15.743844 | orchestrator | 2025-05-30 00:44:15.744308 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-30 00:44:15.744822 | orchestrator | Friday 30 May 2025 00:44:15 +0000 (0:00:00.165) 0:00:22.388 ************ 2025-05-30 00:44:15.906132 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d0cb66e-f8af-5d02-a2d6-05303feeced3', 'data_vg': 'ceph-6d0cb66e-f8af-5d02-a2d6-05303feeced3'})  2025-05-30 00:44:15.906585 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f43ff32d-4fc4-5ece-8353-26072ce1c913', 'data_vg': 'ceph-f43ff32d-4fc4-5ece-8353-26072ce1c913'})  2025-05-30 00:44:15.906619 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:15.907602 | orchestrator | 2025-05-30 00:44:15.908505 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-30 00:44:15.908887 | orchestrator | Friday 30 May 2025 00:44:15 +0000 (0:00:00.163) 0:00:22.551 ************ 2025-05-30 00:44:16.073925 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d0cb66e-f8af-5d02-a2d6-05303feeced3', 'data_vg': 'ceph-6d0cb66e-f8af-5d02-a2d6-05303feeced3'})  2025-05-30 00:44:16.076055 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f43ff32d-4fc4-5ece-8353-26072ce1c913', 'data_vg': 'ceph-f43ff32d-4fc4-5ece-8353-26072ce1c913'})  2025-05-30 00:44:16.077933 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:16.077979 | orchestrator | 2025-05-30 00:44:16.078225 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-30 00:44:16.079260 | orchestrator | Friday 30 May 2025 00:44:16 +0000 (0:00:00.167) 0:00:22.719 ************ 2025-05-30 00:44:16.577266 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:44:16.577657 | orchestrator | 2025-05-30 00:44:16.578293 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-30 00:44:16.579056 | orchestrator | Friday 30 May 2025 00:44:16 +0000 (0:00:00.504) 0:00:23.224 ************ 2025-05-30 00:44:17.105714 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:44:17.106310 | orchestrator | 2025-05-30 00:44:17.107713 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-30 00:44:17.108114 | orchestrator | Friday 30 May 2025 00:44:17 +0000 (0:00:00.526) 0:00:23.750 ************ 2025-05-30 00:44:17.254598 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:44:17.255508 | orchestrator | 2025-05-30 00:44:17.256908 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-30 00:44:17.257434 | orchestrator | Friday 30 May 2025 00:44:17 +0000 (0:00:00.149) 0:00:23.900 ************ 2025-05-30 00:44:17.444158 | 
orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-6d0cb66e-f8af-5d02-a2d6-05303feeced3', 'vg_name': 'ceph-6d0cb66e-f8af-5d02-a2d6-05303feeced3'}) 2025-05-30 00:44:17.445324 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-f43ff32d-4fc4-5ece-8353-26072ce1c913', 'vg_name': 'ceph-f43ff32d-4fc4-5ece-8353-26072ce1c913'}) 2025-05-30 00:44:17.445573 | orchestrator | 2025-05-30 00:44:17.446584 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-30 00:44:17.447366 | orchestrator | Friday 30 May 2025 00:44:17 +0000 (0:00:00.190) 0:00:24.091 ************ 2025-05-30 00:44:17.789757 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d0cb66e-f8af-5d02-a2d6-05303feeced3', 'data_vg': 'ceph-6d0cb66e-f8af-5d02-a2d6-05303feeced3'})  2025-05-30 00:44:17.790765 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f43ff32d-4fc4-5ece-8353-26072ce1c913', 'data_vg': 'ceph-f43ff32d-4fc4-5ece-8353-26072ce1c913'})  2025-05-30 00:44:17.790966 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:17.792459 | orchestrator | 2025-05-30 00:44:17.794977 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-30 00:44:17.795241 | orchestrator | Friday 30 May 2025 00:44:17 +0000 (0:00:00.345) 0:00:24.436 ************ 2025-05-30 00:44:17.963971 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d0cb66e-f8af-5d02-a2d6-05303feeced3', 'data_vg': 'ceph-6d0cb66e-f8af-5d02-a2d6-05303feeced3'})  2025-05-30 00:44:17.965074 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f43ff32d-4fc4-5ece-8353-26072ce1c913', 'data_vg': 'ceph-f43ff32d-4fc4-5ece-8353-26072ce1c913'})  2025-05-30 00:44:17.966007 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:17.967353 | orchestrator | 2025-05-30 00:44:17.967822 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-30 00:44:17.968566 | orchestrator | Friday 30 May 2025 00:44:17 +0000 (0:00:00.173) 0:00:24.610 ************ 2025-05-30 00:44:18.122317 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d0cb66e-f8af-5d02-a2d6-05303feeced3', 'data_vg': 'ceph-6d0cb66e-f8af-5d02-a2d6-05303feeced3'})  2025-05-30 00:44:18.122614 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f43ff32d-4fc4-5ece-8353-26072ce1c913', 'data_vg': 'ceph-f43ff32d-4fc4-5ece-8353-26072ce1c913'})  2025-05-30 00:44:18.123209 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:44:18.123981 | orchestrator | 2025-05-30 00:44:18.124896 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-30 00:44:18.126951 | orchestrator | Friday 30 May 2025 00:44:18 +0000 (0:00:00.158) 0:00:24.768 ************ 2025-05-30 00:44:18.792727 | orchestrator | ok: [testbed-node-3] => { 2025-05-30 00:44:18.792875 | orchestrator |  "lvm_report": { 2025-05-30 00:44:18.793274 | orchestrator |  "lv": [ 2025-05-30 00:44:18.794004 | orchestrator |  { 2025-05-30 00:44:18.794201 | orchestrator |  "lv_name": "osd-block-6d0cb66e-f8af-5d02-a2d6-05303feeced3", 2025-05-30 00:44:18.794773 | orchestrator |  "vg_name": "ceph-6d0cb66e-f8af-5d02-a2d6-05303feeced3" 2025-05-30 00:44:18.796494 | orchestrator |  }, 2025-05-30 00:44:18.796525 | orchestrator |  { 2025-05-30 00:44:18.797273 | orchestrator |  "lv_name": "osd-block-f43ff32d-4fc4-5ece-8353-26072ce1c913", 2025-05-30 
00:44:18.797372 | orchestrator |  "vg_name": "ceph-f43ff32d-4fc4-5ece-8353-26072ce1c913" 2025-05-30 00:44:18.797830 | orchestrator |  } 2025-05-30 00:44:18.798280 | orchestrator |  ], 2025-05-30 00:44:18.798734 | orchestrator |  "pv": [ 2025-05-30 00:44:18.799301 | orchestrator |  { 2025-05-30 00:44:18.799565 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-30 00:44:18.800308 | orchestrator |  "vg_name": "ceph-6d0cb66e-f8af-5d02-a2d6-05303feeced3" 2025-05-30 00:44:18.800701 | orchestrator |  }, 2025-05-30 00:44:18.800790 | orchestrator |  { 2025-05-30 00:44:18.801277 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-30 00:44:18.801500 | orchestrator |  "vg_name": "ceph-f43ff32d-4fc4-5ece-8353-26072ce1c913" 2025-05-30 00:44:18.801736 | orchestrator |  } 2025-05-30 00:44:18.802091 | orchestrator |  ] 2025-05-30 00:44:18.802413 | orchestrator |  } 2025-05-30 00:44:18.802648 | orchestrator | } 2025-05-30 00:44:18.802789 | orchestrator | 2025-05-30 00:44:18.803192 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-30 00:44:18.803434 | orchestrator | 2025-05-30 00:44:18.803870 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-30 00:44:18.803960 | orchestrator | Friday 30 May 2025 00:44:18 +0000 (0:00:00.670) 0:00:25.439 ************ 2025-05-30 00:44:19.390368 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-30 00:44:19.391563 | orchestrator | 2025-05-30 00:44:19.392199 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-30 00:44:19.394907 | orchestrator | Friday 30 May 2025 00:44:19 +0000 (0:00:00.597) 0:00:26.036 ************ 2025-05-30 00:44:19.612740 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:44:19.613760 | orchestrator | 2025-05-30 00:44:19.614514 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:44:19.615605 | orchestrator | Friday 30 May 2025 00:44:19 +0000 (0:00:00.222) 0:00:26.259 ************ 2025-05-30 00:44:20.056013 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-30 00:44:20.056122 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-30 00:44:20.060092 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-30 00:44:20.060120 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-30 00:44:20.060161 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-30 00:44:20.060251 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-30 00:44:20.061073 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-30 00:44:20.061831 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-30 00:44:20.062941 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-30 00:44:20.063230 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-30 00:44:20.065223 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-30 00:44:20.065446 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-05-30 00:44:20.066074 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-30 00:44:20.066399 | orchestrator | 2025-05-30 00:44:20.066889 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:44:20.067426 | orchestrator | Friday 30 May 2025 00:44:20 +0000 (0:00:00.442) 0:00:26.701 ************ 2025-05-30 00:44:20.256771 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:20.257301 | orchestrator | 2025-05-30 00:44:20.258184 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:44:20.258812 | orchestrator | Friday 30 May 2025 00:44:20 +0000 (0:00:00.202) 0:00:26.903 ************ 2025-05-30 00:44:20.474226 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:20.474325 | orchestrator | 2025-05-30 00:44:20.474567 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:44:20.474987 | orchestrator | Friday 30 May 2025 00:44:20 +0000 (0:00:00.217) 0:00:27.121 ************ 2025-05-30 00:44:20.671735 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:20.672252 | orchestrator | 2025-05-30 00:44:20.674499 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:44:20.674593 | orchestrator | Friday 30 May 2025 00:44:20 +0000 (0:00:00.195) 0:00:27.316 ************ 2025-05-30 00:44:20.859077 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:20.859319 | orchestrator | 2025-05-30 00:44:20.860412 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:44:20.861726 | orchestrator | Friday 30 May 2025 00:44:20 +0000 (0:00:00.188) 0:00:27.505 ************ 2025-05-30 00:44:21.048805 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:21.049941 | orchestrator | 2025-05-30 00:44:21.050907 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:44:21.051874 | orchestrator | Friday 30 May 2025 00:44:21 +0000 (0:00:00.190) 0:00:27.695 ************ 2025-05-30 00:44:21.244328 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:21.244528 | orchestrator | 2025-05-30 00:44:21.245843 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:44:21.248425 | orchestrator | Friday 30 May 2025 00:44:21 +0000 (0:00:00.195) 0:00:27.891 ************ 2025-05-30 00:44:21.442274 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:21.442903 | orchestrator | 2025-05-30 00:44:21.444034 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:44:21.444842 | orchestrator | Friday 30 May 2025 00:44:21 +0000 (0:00:00.197) 0:00:28.088 ************ 2025-05-30 00:44:22.054409 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:22.055092 | orchestrator | 2025-05-30 00:44:22.056011 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:44:22.056867 | orchestrator | Friday 30 May 2025 00:44:22 +0000 (0:00:00.611) 0:00:28.700 ************ 2025-05-30 00:44:22.469491 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_62bf4b98-4a21-4975-9c67-1ea56f697b51) 2025-05-30 00:44:22.469728 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_62bf4b98-4a21-4975-9c67-1ea56f697b51) 2025-05-30 00:44:22.470576 | orchestrator | 2025-05-30 00:44:22.471142 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:44:22.471956 | orchestrator | Friday 30 May 2025 00:44:22 +0000 (0:00:00.415) 0:00:29.115 ************ 2025-05-30 00:44:22.901429 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_173bbd31-d008-4662-8aea-7cfb1ab21884) 2025-05-30 00:44:22.902209 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_173bbd31-d008-4662-8aea-7cfb1ab21884) 2025-05-30 00:44:22.905322 | orchestrator | 2025-05-30 00:44:22.905431 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:44:22.905460 | orchestrator | Friday 30 May 2025 00:44:22 +0000 (0:00:00.431) 0:00:29.547 ************ 2025-05-30 00:44:23.324704 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fd28e93c-f7f0-4d71-9af0-3817aadd609f) 2025-05-30 00:44:23.324855 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fd28e93c-f7f0-4d71-9af0-3817aadd609f) 2025-05-30 00:44:23.325558 | orchestrator | 2025-05-30 00:44:23.326557 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:44:23.326897 | orchestrator | Friday 30 May 2025 00:44:23 +0000 (0:00:00.423) 0:00:29.971 ************ 2025-05-30 00:44:23.771621 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fcd55a48-2b4a-45aa-bb97-767fc341b1ef) 2025-05-30 00:44:23.771917 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fcd55a48-2b4a-45aa-bb97-767fc341b1ef) 2025-05-30 00:44:23.772546 | orchestrator | 2025-05-30 00:44:23.773170 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:44:23.773593 | orchestrator | Friday 30 May 2025 00:44:23 +0000 (0:00:00.446) 0:00:30.418 ************ 2025-05-30 00:44:24.091620 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-30 00:44:24.092438 | orchestrator | 2025-05-30 00:44:24.092816 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:24.095092 | orchestrator | Friday 30 May 2025 00:44:24 +0000 (0:00:00.319) 0:00:30.737 ************ 2025-05-30 00:44:24.549572 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-05-30 00:44:24.552180 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-30 00:44:24.553302 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-30 00:44:24.555123 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-30 00:44:24.555396 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-30 00:44:24.555760 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-30 00:44:24.556301 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-30 00:44:24.556597 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-30 00:44:24.557071 | orchestrator | included: 
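The block of "Add known links to the list of available block devices" tasks loops over every kernel device (loop0..loop7, sda..sdd, sr0) and includes /ansible/tasks/_add-device-links.yml for each; the items that come back ok are the stable /dev/disk/by-id names (scsi-0QEMU_.../scsi-SQEMU_... for the disks, ata-QEMU_DVD-ROM_QM00001 for the DVD drive). The include file itself is not part of this log; the following is only a rough sketch of one way to resolve those aliases per device, with _device_links as an assumed fact name:

  # Sketch: 'item' is the device name from the outer loop seen in the log.
  # Find the /dev/disk/by-id symlinks that point at it and record their names.
  - name: Find by-id links that resolve to this device
    ansible.builtin.command: find /dev/disk/by-id -maxdepth 1 -lname "*/{{ item }}"
    register: _by_id_links
    changed_when: false

  - name: Add known links to the list of available block devices
    ansible.builtin.set_fact:
      _device_links: "{{ _device_links | default([]) + (_by_id_links.stdout_lines | map('basename') | list) }}"

The iterations that show as skipping presumably correspond to devices without a matching by-id alias, such as the loop devices.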
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-30 00:44:24.557248 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-30 00:44:24.557280 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-05-30 00:44:24.557486 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-30 00:44:24.557894 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-30 00:44:24.558169 | orchestrator | 2025-05-30 00:44:24.560015 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:24.560067 | orchestrator | Friday 30 May 2025 00:44:24 +0000 (0:00:00.457) 0:00:31.194 ************ 2025-05-30 00:44:24.749153 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:24.749523 | orchestrator | 2025-05-30 00:44:24.751215 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:24.751557 | orchestrator | Friday 30 May 2025 00:44:24 +0000 (0:00:00.200) 0:00:31.395 ************ 2025-05-30 00:44:24.945037 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:24.945492 | orchestrator | 2025-05-30 00:44:24.947611 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:24.949323 | orchestrator | Friday 30 May 2025 00:44:24 +0000 (0:00:00.196) 0:00:31.591 ************ 2025-05-30 00:44:25.498913 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:25.499436 | orchestrator | 2025-05-30 00:44:25.500164 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:25.500987 | orchestrator | Friday 30 May 2025 00:44:25 +0000 (0:00:00.554) 0:00:32.146 ************ 2025-05-30 00:44:25.720639 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:25.720765 | orchestrator | 2025-05-30 00:44:25.722105 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:25.723169 | orchestrator | Friday 30 May 2025 00:44:25 +0000 (0:00:00.220) 0:00:32.366 ************ 2025-05-30 00:44:25.910298 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:25.910929 | orchestrator | 2025-05-30 00:44:25.911705 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:25.912527 | orchestrator | Friday 30 May 2025 00:44:25 +0000 (0:00:00.190) 0:00:32.557 ************ 2025-05-30 00:44:26.115228 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:26.116696 | orchestrator | 2025-05-30 00:44:26.117832 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:26.119361 | orchestrator | Friday 30 May 2025 00:44:26 +0000 (0:00:00.201) 0:00:32.758 ************ 2025-05-30 00:44:26.335626 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:26.335862 | orchestrator | 2025-05-30 00:44:26.336924 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:26.338089 | orchestrator | Friday 30 May 2025 00:44:26 +0000 (0:00:00.221) 0:00:32.980 ************ 2025-05-30 00:44:26.542913 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:26.545445 | orchestrator | 2025-05-30 00:44:26.546143 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-05-30 00:44:26.547430 | orchestrator | Friday 30 May 2025 00:44:26 +0000 (0:00:00.207) 0:00:33.188 ************ 2025-05-30 00:44:27.197065 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-30 00:44:27.197174 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-30 00:44:27.198522 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-30 00:44:27.199725 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-30 00:44:27.202212 | orchestrator | 2025-05-30 00:44:27.203787 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:27.206483 | orchestrator | Friday 30 May 2025 00:44:27 +0000 (0:00:00.650) 0:00:33.838 ************ 2025-05-30 00:44:27.412172 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:27.412273 | orchestrator | 2025-05-30 00:44:27.414359 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:27.415050 | orchestrator | Friday 30 May 2025 00:44:27 +0000 (0:00:00.216) 0:00:34.054 ************ 2025-05-30 00:44:27.607482 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:27.609815 | orchestrator | 2025-05-30 00:44:27.609849 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:27.609910 | orchestrator | Friday 30 May 2025 00:44:27 +0000 (0:00:00.198) 0:00:34.253 ************ 2025-05-30 00:44:27.803838 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:27.804010 | orchestrator | 2025-05-30 00:44:27.804535 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:27.805151 | orchestrator | Friday 30 May 2025 00:44:27 +0000 (0:00:00.198) 0:00:34.451 ************ 2025-05-30 00:44:28.397664 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:28.398319 | orchestrator | 2025-05-30 00:44:28.402108 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-30 00:44:28.402777 | orchestrator | Friday 30 May 2025 00:44:28 +0000 (0:00:00.591) 0:00:35.042 ************ 2025-05-30 00:44:28.538574 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:28.544390 | orchestrator | 2025-05-30 00:44:28.545113 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-30 00:44:28.545920 | orchestrator | Friday 30 May 2025 00:44:28 +0000 (0:00:00.142) 0:00:35.185 ************ 2025-05-30 00:44:28.751514 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '50b3064c-7478-543e-8abf-661fdbdc95ce'}}) 2025-05-30 00:44:28.752398 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '749c70bc-bf8f-56a3-a425-711d4530659c'}}) 2025-05-30 00:44:28.753093 | orchestrator | 2025-05-30 00:44:28.753692 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-30 00:44:28.756961 | orchestrator | Friday 30 May 2025 00:44:28 +0000 (0:00:00.212) 0:00:35.398 ************ 2025-05-30 00:44:30.535607 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-50b3064c-7478-543e-8abf-661fdbdc95ce', 'data_vg': 'ceph-50b3064c-7478-543e-8abf-661fdbdc95ce'}) 2025-05-30 00:44:30.536518 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-749c70bc-bf8f-56a3-a425-711d4530659c', 'data_vg': 
'ceph-749c70bc-bf8f-56a3-a425-711d4530659c'}) 2025-05-30 00:44:30.538338 | orchestrator | 2025-05-30 00:44:30.540120 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-30 00:44:30.541328 | orchestrator | Friday 30 May 2025 00:44:30 +0000 (0:00:01.782) 0:00:37.180 ************ 2025-05-30 00:44:30.722362 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50b3064c-7478-543e-8abf-661fdbdc95ce', 'data_vg': 'ceph-50b3064c-7478-543e-8abf-661fdbdc95ce'})  2025-05-30 00:44:30.723384 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-749c70bc-bf8f-56a3-a425-711d4530659c', 'data_vg': 'ceph-749c70bc-bf8f-56a3-a425-711d4530659c'})  2025-05-30 00:44:30.723891 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:30.727925 | orchestrator | 2025-05-30 00:44:30.727979 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-30 00:44:30.727994 | orchestrator | Friday 30 May 2025 00:44:30 +0000 (0:00:00.187) 0:00:37.368 ************ 2025-05-30 00:44:32.022413 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-50b3064c-7478-543e-8abf-661fdbdc95ce', 'data_vg': 'ceph-50b3064c-7478-543e-8abf-661fdbdc95ce'}) 2025-05-30 00:44:32.022750 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-749c70bc-bf8f-56a3-a425-711d4530659c', 'data_vg': 'ceph-749c70bc-bf8f-56a3-a425-711d4530659c'}) 2025-05-30 00:44:32.024417 | orchestrator | 2025-05-30 00:44:32.025901 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-30 00:44:32.026502 | orchestrator | Friday 30 May 2025 00:44:32 +0000 (0:00:01.300) 0:00:38.668 ************ 2025-05-30 00:44:32.180652 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50b3064c-7478-543e-8abf-661fdbdc95ce', 'data_vg': 'ceph-50b3064c-7478-543e-8abf-661fdbdc95ce'})  2025-05-30 00:44:32.181169 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-749c70bc-bf8f-56a3-a425-711d4530659c', 'data_vg': 'ceph-749c70bc-bf8f-56a3-a425-711d4530659c'})  2025-05-30 00:44:32.181977 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:32.182646 | orchestrator | 2025-05-30 00:44:32.185802 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-30 00:44:32.185827 | orchestrator | Friday 30 May 2025 00:44:32 +0000 (0:00:00.159) 0:00:38.827 ************ 2025-05-30 00:44:32.312368 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:32.312513 | orchestrator | 2025-05-30 00:44:32.313132 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-30 00:44:32.315138 | orchestrator | Friday 30 May 2025 00:44:32 +0000 (0:00:00.132) 0:00:38.959 ************ 2025-05-30 00:44:32.475256 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50b3064c-7478-543e-8abf-661fdbdc95ce', 'data_vg': 'ceph-50b3064c-7478-543e-8abf-661fdbdc95ce'})  2025-05-30 00:44:32.475930 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-749c70bc-bf8f-56a3-a425-711d4530659c', 'data_vg': 'ceph-749c70bc-bf8f-56a3-a425-711d4530659c'})  2025-05-30 00:44:32.477638 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:32.478932 | orchestrator | 2025-05-30 00:44:32.479882 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-30 00:44:32.482332 | orchestrator | Friday 30 
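The "Create dict of block VGs -> PVs from ceph_osd_devices" task maps sdb/sdc to the osd_lvm_uuid values shown above, and "Create block VGs" / "Create block LVs" then report changed for one ceph-<uuid> VG per device with an osd-block-<uuid> LV inside it. A sketch of equivalent tasks using community.general.lvg/lvol, assuming the LV takes the whole VG (how the real playbook sizes the LV is not visible here):

  # Sketch: one VG per OSD device, one block LV filling it, following the
  # ceph-<uuid>/osd-block-<uuid> naming visible in the log. 100%FREE sizing
  # is an assumption.
  - name: Create block VGs
    community.general.lvg:
      vg: "ceph-{{ osd.value.osd_lvm_uuid }}"
      pvs: "/dev/{{ osd.key }}"
    loop: "{{ ceph_osd_devices | dict2items }}"
    loop_control:
      loop_var: osd

  - name: Create block LVs
    community.general.lvol:
      vg: "ceph-{{ osd.value.osd_lvm_uuid }}"
      lv: "osd-block-{{ osd.value.osd_lvm_uuid }}"
      size: 100%FREE
    loop: "{{ ceph_osd_devices | dict2items }}"
    loop_control:
      loop_var: osd

Both modules are idempotent, which is why a re-run of this play would report ok instead of changed for these items.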
May 2025 00:44:32 +0000 (0:00:00.162) 0:00:39.122 ************ 2025-05-30 00:44:32.782414 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:32.782844 | orchestrator | 2025-05-30 00:44:32.787034 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-30 00:44:32.787454 | orchestrator | Friday 30 May 2025 00:44:32 +0000 (0:00:00.304) 0:00:39.427 ************ 2025-05-30 00:44:32.954095 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50b3064c-7478-543e-8abf-661fdbdc95ce', 'data_vg': 'ceph-50b3064c-7478-543e-8abf-661fdbdc95ce'})  2025-05-30 00:44:32.955017 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-749c70bc-bf8f-56a3-a425-711d4530659c', 'data_vg': 'ceph-749c70bc-bf8f-56a3-a425-711d4530659c'})  2025-05-30 00:44:32.956630 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:32.960730 | orchestrator | 2025-05-30 00:44:32.961130 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-30 00:44:32.962082 | orchestrator | Friday 30 May 2025 00:44:32 +0000 (0:00:00.173) 0:00:39.601 ************ 2025-05-30 00:44:33.099955 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:33.100499 | orchestrator | 2025-05-30 00:44:33.101864 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-30 00:44:33.105109 | orchestrator | Friday 30 May 2025 00:44:33 +0000 (0:00:00.145) 0:00:39.746 ************ 2025-05-30 00:44:33.265959 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50b3064c-7478-543e-8abf-661fdbdc95ce', 'data_vg': 'ceph-50b3064c-7478-543e-8abf-661fdbdc95ce'})  2025-05-30 00:44:33.266748 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-749c70bc-bf8f-56a3-a425-711d4530659c', 'data_vg': 'ceph-749c70bc-bf8f-56a3-a425-711d4530659c'})  2025-05-30 00:44:33.267561 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:33.268944 | orchestrator | 2025-05-30 00:44:33.272894 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-30 00:44:33.273341 | orchestrator | Friday 30 May 2025 00:44:33 +0000 (0:00:00.166) 0:00:39.912 ************ 2025-05-30 00:44:33.408034 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:44:33.408799 | orchestrator | 2025-05-30 00:44:33.409773 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-30 00:44:33.410495 | orchestrator | Friday 30 May 2025 00:44:33 +0000 (0:00:00.142) 0:00:40.055 ************ 2025-05-30 00:44:33.566822 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50b3064c-7478-543e-8abf-661fdbdc95ce', 'data_vg': 'ceph-50b3064c-7478-543e-8abf-661fdbdc95ce'})  2025-05-30 00:44:33.567439 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-749c70bc-bf8f-56a3-a425-711d4530659c', 'data_vg': 'ceph-749c70bc-bf8f-56a3-a425-711d4530659c'})  2025-05-30 00:44:33.568980 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:33.573123 | orchestrator | 2025-05-30 00:44:33.573860 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-30 00:44:33.574485 | orchestrator | Friday 30 May 2025 00:44:33 +0000 (0:00:00.158) 0:00:40.213 ************ 2025-05-30 00:44:33.750459 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50b3064c-7478-543e-8abf-661fdbdc95ce', 'data_vg': 
'ceph-50b3064c-7478-543e-8abf-661fdbdc95ce'})  2025-05-30 00:44:33.751529 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-749c70bc-bf8f-56a3-a425-711d4530659c', 'data_vg': 'ceph-749c70bc-bf8f-56a3-a425-711d4530659c'})  2025-05-30 00:44:33.752501 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:33.756019 | orchestrator | 2025-05-30 00:44:33.756476 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-30 00:44:33.757170 | orchestrator | Friday 30 May 2025 00:44:33 +0000 (0:00:00.183) 0:00:40.397 ************ 2025-05-30 00:44:33.910273 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50b3064c-7478-543e-8abf-661fdbdc95ce', 'data_vg': 'ceph-50b3064c-7478-543e-8abf-661fdbdc95ce'})  2025-05-30 00:44:33.911005 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-749c70bc-bf8f-56a3-a425-711d4530659c', 'data_vg': 'ceph-749c70bc-bf8f-56a3-a425-711d4530659c'})  2025-05-30 00:44:33.912101 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:33.918640 | orchestrator | 2025-05-30 00:44:33.918790 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-30 00:44:33.918808 | orchestrator | Friday 30 May 2025 00:44:33 +0000 (0:00:00.159) 0:00:40.557 ************ 2025-05-30 00:44:34.053325 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:34.054422 | orchestrator | 2025-05-30 00:44:34.055269 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-30 00:44:34.056280 | orchestrator | Friday 30 May 2025 00:44:34 +0000 (0:00:00.143) 0:00:40.700 ************ 2025-05-30 00:44:34.194807 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:34.195470 | orchestrator | 2025-05-30 00:44:34.196522 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-30 00:44:34.197527 | orchestrator | Friday 30 May 2025 00:44:34 +0000 (0:00:00.141) 0:00:40.841 ************ 2025-05-30 00:44:34.334385 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:34.335588 | orchestrator | 2025-05-30 00:44:34.339839 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-30 00:44:34.341899 | orchestrator | Friday 30 May 2025 00:44:34 +0000 (0:00:00.138) 0:00:40.980 ************ 2025-05-30 00:44:34.477268 | orchestrator | ok: [testbed-node-4] => { 2025-05-30 00:44:34.478319 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-30 00:44:34.479052 | orchestrator | } 2025-05-30 00:44:34.479542 | orchestrator | 2025-05-30 00:44:34.483294 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-30 00:44:34.483380 | orchestrator | Friday 30 May 2025 00:44:34 +0000 (0:00:00.144) 0:00:41.124 ************ 2025-05-30 00:44:34.812832 | orchestrator | ok: [testbed-node-4] => { 2025-05-30 00:44:34.812988 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-30 00:44:34.813857 | orchestrator | } 2025-05-30 00:44:34.815800 | orchestrator | 2025-05-30 00:44:34.816132 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-30 00:44:34.816636 | orchestrator | Friday 30 May 2025 00:44:34 +0000 (0:00:00.333) 0:00:41.457 ************ 2025-05-30 00:44:34.957769 | orchestrator | ok: [testbed-node-4] => { 2025-05-30 00:44:34.959020 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-30 
00:44:34.962543 | orchestrator | } 2025-05-30 00:44:34.963275 | orchestrator | 2025-05-30 00:44:34.963728 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-30 00:44:34.967484 | orchestrator | Friday 30 May 2025 00:44:34 +0000 (0:00:00.145) 0:00:41.603 ************ 2025-05-30 00:44:35.489442 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:44:35.490114 | orchestrator | 2025-05-30 00:44:35.491191 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-30 00:44:35.492404 | orchestrator | Friday 30 May 2025 00:44:35 +0000 (0:00:00.532) 0:00:42.136 ************ 2025-05-30 00:44:36.010174 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:44:36.010271 | orchestrator | 2025-05-30 00:44:36.010289 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-30 00:44:36.010462 | orchestrator | Friday 30 May 2025 00:44:36 +0000 (0:00:00.518) 0:00:42.654 ************ 2025-05-30 00:44:36.537737 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:44:36.538429 | orchestrator | 2025-05-30 00:44:36.538957 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-30 00:44:36.539798 | orchestrator | Friday 30 May 2025 00:44:36 +0000 (0:00:00.528) 0:00:43.183 ************ 2025-05-30 00:44:36.685240 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:44:36.685846 | orchestrator | 2025-05-30 00:44:36.686732 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-30 00:44:36.689569 | orchestrator | Friday 30 May 2025 00:44:36 +0000 (0:00:00.147) 0:00:43.331 ************ 2025-05-30 00:44:36.795718 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:36.795919 | orchestrator | 2025-05-30 00:44:36.796327 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-30 00:44:36.797193 | orchestrator | Friday 30 May 2025 00:44:36 +0000 (0:00:00.111) 0:00:43.442 ************ 2025-05-30 00:44:36.910534 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:36.910848 | orchestrator | 2025-05-30 00:44:36.911601 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-30 00:44:36.912371 | orchestrator | Friday 30 May 2025 00:44:36 +0000 (0:00:00.114) 0:00:43.557 ************ 2025-05-30 00:44:37.045307 | orchestrator | ok: [testbed-node-4] => { 2025-05-30 00:44:37.045524 | orchestrator |  "vgs_report": { 2025-05-30 00:44:37.046569 | orchestrator |  "vg": [] 2025-05-30 00:44:37.047768 | orchestrator |  } 2025-05-30 00:44:37.048262 | orchestrator | } 2025-05-30 00:44:37.049243 | orchestrator | 2025-05-30 00:44:37.050093 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-30 00:44:37.050774 | orchestrator | Friday 30 May 2025 00:44:37 +0000 (0:00:00.134) 0:00:43.691 ************ 2025-05-30 00:44:37.183209 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:37.183505 | orchestrator | 2025-05-30 00:44:37.183851 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-30 00:44:37.184485 | orchestrator | Friday 30 May 2025 00:44:37 +0000 (0:00:00.137) 0:00:43.829 ************ 2025-05-30 00:44:37.326116 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:37.326315 | orchestrator | 2025-05-30 00:44:37.326450 | orchestrator | TASK [Print size needed for LVs on 
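The "Gather DB/WAL/DB+WAL VGs with total and available size in bytes" tasks and the "Combine JSON from _db/wal/db_wal_vgs_cmd_output" step feed the vgs_report shown just above, which is empty in this testbed because no dedicated DB or WAL devices are configured. A sketch of the kind of query involved, assuming the _db_vgs_cmd_output register name implied by the combine task's title (selection of the relevant VGs is omitted):

  # Sketch: report VG name, total size and free size in bytes as JSON.
  - name: Gather DB VGs with total and available size in bytes
    ansible.builtin.command: >
      vgs --reportformat json --units b --nosuffix
      --options vg_name,vg_size,vg_free
    register: _db_vgs_cmd_output
    changed_when: false

  - name: Combine JSON from _db/wal/db_wal_vgs_cmd_output
    ansible.builtin.set_fact:
      vgs_report:
        vg: "{{ (_db_vgs_cmd_output.stdout | from_json).report[0].vg }}"

The "Fail if size of ... LVs ... > available" checks that follow compare the requested LV sizes against vg_free.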
ceph_db_devices] **************************** 2025-05-30 00:44:37.327149 | orchestrator | Friday 30 May 2025 00:44:37 +0000 (0:00:00.143) 0:00:43.972 ************ 2025-05-30 00:44:37.658577 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:37.659707 | orchestrator | 2025-05-30 00:44:37.660320 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-30 00:44:37.661391 | orchestrator | Friday 30 May 2025 00:44:37 +0000 (0:00:00.332) 0:00:44.305 ************ 2025-05-30 00:44:37.800985 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:37.801151 | orchestrator | 2025-05-30 00:44:37.801603 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-30 00:44:37.802334 | orchestrator | Friday 30 May 2025 00:44:37 +0000 (0:00:00.140) 0:00:44.446 ************ 2025-05-30 00:44:37.929275 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:37.929550 | orchestrator | 2025-05-30 00:44:37.930550 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-30 00:44:37.931451 | orchestrator | Friday 30 May 2025 00:44:37 +0000 (0:00:00.129) 0:00:44.576 ************ 2025-05-30 00:44:38.068560 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:38.068977 | orchestrator | 2025-05-30 00:44:38.069785 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-30 00:44:38.070612 | orchestrator | Friday 30 May 2025 00:44:38 +0000 (0:00:00.138) 0:00:44.714 ************ 2025-05-30 00:44:38.202629 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:38.202934 | orchestrator | 2025-05-30 00:44:38.203290 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-30 00:44:38.203871 | orchestrator | Friday 30 May 2025 00:44:38 +0000 (0:00:00.134) 0:00:44.849 ************ 2025-05-30 00:44:38.347332 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:38.347775 | orchestrator | 2025-05-30 00:44:38.348566 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-30 00:44:38.349361 | orchestrator | Friday 30 May 2025 00:44:38 +0000 (0:00:00.144) 0:00:44.994 ************ 2025-05-30 00:44:38.486297 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:38.486762 | orchestrator | 2025-05-30 00:44:38.487346 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-30 00:44:38.488250 | orchestrator | Friday 30 May 2025 00:44:38 +0000 (0:00:00.138) 0:00:45.132 ************ 2025-05-30 00:44:38.623838 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:38.626059 | orchestrator | 2025-05-30 00:44:38.627781 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-30 00:44:38.627941 | orchestrator | Friday 30 May 2025 00:44:38 +0000 (0:00:00.136) 0:00:45.269 ************ 2025-05-30 00:44:38.758275 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:38.759086 | orchestrator | 2025-05-30 00:44:38.759413 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-30 00:44:38.760220 | orchestrator | Friday 30 May 2025 00:44:38 +0000 (0:00:00.134) 0:00:45.403 ************ 2025-05-30 00:44:38.898224 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:38.898435 | orchestrator | 2025-05-30 00:44:38.899116 | orchestrator | TASK [Fail if DB LV 
size < 30 GiB for ceph_db_devices] ************************* 2025-05-30 00:44:38.899513 | orchestrator | Friday 30 May 2025 00:44:38 +0000 (0:00:00.141) 0:00:45.545 ************ 2025-05-30 00:44:39.033351 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:39.034094 | orchestrator | 2025-05-30 00:44:39.034993 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-30 00:44:39.035876 | orchestrator | Friday 30 May 2025 00:44:39 +0000 (0:00:00.134) 0:00:45.680 ************ 2025-05-30 00:44:39.173605 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:39.173762 | orchestrator | 2025-05-30 00:44:39.174609 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-30 00:44:39.175587 | orchestrator | Friday 30 May 2025 00:44:39 +0000 (0:00:00.139) 0:00:45.820 ************ 2025-05-30 00:44:39.554330 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50b3064c-7478-543e-8abf-661fdbdc95ce', 'data_vg': 'ceph-50b3064c-7478-543e-8abf-661fdbdc95ce'})  2025-05-30 00:44:39.555739 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-749c70bc-bf8f-56a3-a425-711d4530659c', 'data_vg': 'ceph-749c70bc-bf8f-56a3-a425-711d4530659c'})  2025-05-30 00:44:39.556527 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:39.557288 | orchestrator | 2025-05-30 00:44:39.558102 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-30 00:44:39.558421 | orchestrator | Friday 30 May 2025 00:44:39 +0000 (0:00:00.379) 0:00:46.199 ************ 2025-05-30 00:44:39.730858 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50b3064c-7478-543e-8abf-661fdbdc95ce', 'data_vg': 'ceph-50b3064c-7478-543e-8abf-661fdbdc95ce'})  2025-05-30 00:44:39.731848 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-749c70bc-bf8f-56a3-a425-711d4530659c', 'data_vg': 'ceph-749c70bc-bf8f-56a3-a425-711d4530659c'})  2025-05-30 00:44:39.732178 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:39.732898 | orchestrator | 2025-05-30 00:44:39.733547 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-30 00:44:39.734205 | orchestrator | Friday 30 May 2025 00:44:39 +0000 (0:00:00.175) 0:00:46.374 ************ 2025-05-30 00:44:39.895056 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50b3064c-7478-543e-8abf-661fdbdc95ce', 'data_vg': 'ceph-50b3064c-7478-543e-8abf-661fdbdc95ce'})  2025-05-30 00:44:39.895175 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-749c70bc-bf8f-56a3-a425-711d4530659c', 'data_vg': 'ceph-749c70bc-bf8f-56a3-a425-711d4530659c'})  2025-05-30 00:44:39.895194 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:39.895926 | orchestrator | 2025-05-30 00:44:39.895953 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-30 00:44:39.895966 | orchestrator | Friday 30 May 2025 00:44:39 +0000 (0:00:00.166) 0:00:46.540 ************ 2025-05-30 00:44:40.048293 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50b3064c-7478-543e-8abf-661fdbdc95ce', 'data_vg': 'ceph-50b3064c-7478-543e-8abf-661fdbdc95ce'})  2025-05-30 00:44:40.050946 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-749c70bc-bf8f-56a3-a425-711d4530659c', 'data_vg': 'ceph-749c70bc-bf8f-56a3-a425-711d4530659c'})  2025-05-30 
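The two "Fail if DB LV size < 30 GiB ..." tasks above are skipped on this node because no DB devices are defined, but the check itself is just a floor on the planned LV size. A sketch of such a guard, with _db_lv_size_bytes as an assumed variable holding the planned size in bytes:

  # Sketch: enforce the 30 GiB minimum named in the task title.
  - name: Fail if DB LV size < 30 GiB for ceph_db_devices
    ansible.builtin.assert:
      that:
        - (_db_lv_size_bytes | int) >= 30 * 1024 * 1024 * 1024
      fail_msg: DB LVs must be at least 30 GiB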
00:44:40.051000 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:40.051015 | orchestrator | 2025-05-30 00:44:40.051046 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-30 00:44:40.051706 | orchestrator | Friday 30 May 2025 00:44:40 +0000 (0:00:00.154) 0:00:46.695 ************ 2025-05-30 00:44:40.224948 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50b3064c-7478-543e-8abf-661fdbdc95ce', 'data_vg': 'ceph-50b3064c-7478-543e-8abf-661fdbdc95ce'})  2025-05-30 00:44:40.226435 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-749c70bc-bf8f-56a3-a425-711d4530659c', 'data_vg': 'ceph-749c70bc-bf8f-56a3-a425-711d4530659c'})  2025-05-30 00:44:40.226778 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:40.227156 | orchestrator | 2025-05-30 00:44:40.227777 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-30 00:44:40.228018 | orchestrator | Friday 30 May 2025 00:44:40 +0000 (0:00:00.175) 0:00:46.871 ************ 2025-05-30 00:44:40.421137 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50b3064c-7478-543e-8abf-661fdbdc95ce', 'data_vg': 'ceph-50b3064c-7478-543e-8abf-661fdbdc95ce'})  2025-05-30 00:44:40.421343 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-749c70bc-bf8f-56a3-a425-711d4530659c', 'data_vg': 'ceph-749c70bc-bf8f-56a3-a425-711d4530659c'})  2025-05-30 00:44:40.423391 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:40.423423 | orchestrator | 2025-05-30 00:44:40.424002 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-30 00:44:40.430235 | orchestrator | Friday 30 May 2025 00:44:40 +0000 (0:00:00.194) 0:00:47.066 ************ 2025-05-30 00:44:40.615235 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50b3064c-7478-543e-8abf-661fdbdc95ce', 'data_vg': 'ceph-50b3064c-7478-543e-8abf-661fdbdc95ce'})  2025-05-30 00:44:40.615336 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-749c70bc-bf8f-56a3-a425-711d4530659c', 'data_vg': 'ceph-749c70bc-bf8f-56a3-a425-711d4530659c'})  2025-05-30 00:44:40.617067 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:40.617100 | orchestrator | 2025-05-30 00:44:40.617791 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-30 00:44:40.618182 | orchestrator | Friday 30 May 2025 00:44:40 +0000 (0:00:00.195) 0:00:47.261 ************ 2025-05-30 00:44:40.786364 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50b3064c-7478-543e-8abf-661fdbdc95ce', 'data_vg': 'ceph-50b3064c-7478-543e-8abf-661fdbdc95ce'})  2025-05-30 00:44:40.787637 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-749c70bc-bf8f-56a3-a425-711d4530659c', 'data_vg': 'ceph-749c70bc-bf8f-56a3-a425-711d4530659c'})  2025-05-30 00:44:40.790628 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:40.790666 | orchestrator | 2025-05-30 00:44:40.790699 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-30 00:44:40.791007 | orchestrator | Friday 30 May 2025 00:44:40 +0000 (0:00:00.170) 0:00:47.432 ************ 2025-05-30 00:44:41.314663 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:44:41.315518 | orchestrator | 2025-05-30 00:44:41.315577 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] 
******************************** 2025-05-30 00:44:41.316519 | orchestrator | Friday 30 May 2025 00:44:41 +0000 (0:00:00.528) 0:00:47.960 ************ 2025-05-30 00:44:41.826487 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:44:41.826582 | orchestrator | 2025-05-30 00:44:41.826972 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-30 00:44:41.827837 | orchestrator | Friday 30 May 2025 00:44:41 +0000 (0:00:00.511) 0:00:48.472 ************ 2025-05-30 00:44:41.977099 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:44:41.977197 | orchestrator | 2025-05-30 00:44:41.978826 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-30 00:44:41.981347 | orchestrator | Friday 30 May 2025 00:44:41 +0000 (0:00:00.146) 0:00:48.618 ************ 2025-05-30 00:44:42.391987 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-50b3064c-7478-543e-8abf-661fdbdc95ce', 'vg_name': 'ceph-50b3064c-7478-543e-8abf-661fdbdc95ce'}) 2025-05-30 00:44:42.392165 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-749c70bc-bf8f-56a3-a425-711d4530659c', 'vg_name': 'ceph-749c70bc-bf8f-56a3-a425-711d4530659c'}) 2025-05-30 00:44:42.392702 | orchestrator | 2025-05-30 00:44:42.394433 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-30 00:44:42.395042 | orchestrator | Friday 30 May 2025 00:44:42 +0000 (0:00:00.419) 0:00:49.038 ************ 2025-05-30 00:44:42.564648 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50b3064c-7478-543e-8abf-661fdbdc95ce', 'data_vg': 'ceph-50b3064c-7478-543e-8abf-661fdbdc95ce'})  2025-05-30 00:44:42.565361 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-749c70bc-bf8f-56a3-a425-711d4530659c', 'data_vg': 'ceph-749c70bc-bf8f-56a3-a425-711d4530659c'})  2025-05-30 00:44:42.566192 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:42.567161 | orchestrator | 2025-05-30 00:44:42.568490 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-30 00:44:42.568517 | orchestrator | Friday 30 May 2025 00:44:42 +0000 (0:00:00.170) 0:00:49.209 ************ 2025-05-30 00:44:42.729317 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50b3064c-7478-543e-8abf-661fdbdc95ce', 'data_vg': 'ceph-50b3064c-7478-543e-8abf-661fdbdc95ce'})  2025-05-30 00:44:42.729790 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-749c70bc-bf8f-56a3-a425-711d4530659c', 'data_vg': 'ceph-749c70bc-bf8f-56a3-a425-711d4530659c'})  2025-05-30 00:44:42.730455 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:42.730483 | orchestrator | 2025-05-30 00:44:42.731469 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-30 00:44:42.731866 | orchestrator | Friday 30 May 2025 00:44:42 +0000 (0:00:00.166) 0:00:49.376 ************ 2025-05-30 00:44:42.893854 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50b3064c-7478-543e-8abf-661fdbdc95ce', 'data_vg': 'ceph-50b3064c-7478-543e-8abf-661fdbdc95ce'})  2025-05-30 00:44:42.895976 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-749c70bc-bf8f-56a3-a425-711d4530659c', 'data_vg': 'ceph-749c70bc-bf8f-56a3-a425-711d4530659c'})  2025-05-30 00:44:42.896715 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:44:42.896731 | orchestrator | 2025-05-30 
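The "Create list of VG/LV names" task turns lvm_report.lv into VG/LV pairs, and the three "Fail if ... LV defined in lvm_volumes is missing" tasks then verify that every data/data_vg entry of lvm_volumes exists on the host (the per-item checks show as skipping in this run). A sketch of that verification, with _vg_lv_names as an assumed fact name:

  # Sketch: build "vg/lv" strings from lvm_report.lv and assert that each
  # lvm_volumes entry (data/data_vg keys, as shown in the log items) is present.
  - name: Create list of VG/LV names
    ansible.builtin.set_fact:
      _vg_lv_names: "{{ _vg_lv_names | default([]) + [lv.vg_name ~ '/' ~ lv.lv_name] }}"
    loop: "{{ lvm_report.lv }}"
    loop_control:
      loop_var: lv

  - name: Fail if block LV defined in lvm_volumes is missing
    ansible.builtin.assert:
      that:
        - (volume.data_vg ~ '/' ~ volume.data) in _vg_lv_names
    loop: "{{ lvm_volumes }}"
    loop_control:
      loop_var: volume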
00:44:42.897069 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-30 00:44:42.897727 | orchestrator | Friday 30 May 2025 00:44:42 +0000 (0:00:00.162) 0:00:49.538 ************ 2025-05-30 00:44:43.701106 | orchestrator | ok: [testbed-node-4] => { 2025-05-30 00:44:43.701343 | orchestrator |  "lvm_report": { 2025-05-30 00:44:43.702836 | orchestrator |  "lv": [ 2025-05-30 00:44:43.703543 | orchestrator |  { 2025-05-30 00:44:43.704136 | orchestrator |  "lv_name": "osd-block-50b3064c-7478-543e-8abf-661fdbdc95ce", 2025-05-30 00:44:43.704353 | orchestrator |  "vg_name": "ceph-50b3064c-7478-543e-8abf-661fdbdc95ce" 2025-05-30 00:44:43.704750 | orchestrator |  }, 2025-05-30 00:44:43.705017 | orchestrator |  { 2025-05-30 00:44:43.706430 | orchestrator |  "lv_name": "osd-block-749c70bc-bf8f-56a3-a425-711d4530659c", 2025-05-30 00:44:43.706907 | orchestrator |  "vg_name": "ceph-749c70bc-bf8f-56a3-a425-711d4530659c" 2025-05-30 00:44:43.707098 | orchestrator |  } 2025-05-30 00:44:43.707833 | orchestrator |  ], 2025-05-30 00:44:43.708040 | orchestrator |  "pv": [ 2025-05-30 00:44:43.708221 | orchestrator |  { 2025-05-30 00:44:43.709229 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-30 00:44:43.709554 | orchestrator |  "vg_name": "ceph-50b3064c-7478-543e-8abf-661fdbdc95ce" 2025-05-30 00:44:43.710161 | orchestrator |  }, 2025-05-30 00:44:43.710435 | orchestrator |  { 2025-05-30 00:44:43.711789 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-30 00:44:43.711811 | orchestrator |  "vg_name": "ceph-749c70bc-bf8f-56a3-a425-711d4530659c" 2025-05-30 00:44:43.712510 | orchestrator |  } 2025-05-30 00:44:43.712758 | orchestrator |  ] 2025-05-30 00:44:43.713127 | orchestrator |  } 2025-05-30 00:44:43.713378 | orchestrator | } 2025-05-30 00:44:43.713969 | orchestrator | 2025-05-30 00:44:43.714298 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-30 00:44:43.714827 | orchestrator | 2025-05-30 00:44:43.715031 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-30 00:44:43.715344 | orchestrator | Friday 30 May 2025 00:44:43 +0000 (0:00:00.808) 0:00:50.347 ************ 2025-05-30 00:44:43.944790 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-30 00:44:43.944927 | orchestrator | 2025-05-30 00:44:43.945204 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-30 00:44:43.947364 | orchestrator | Friday 30 May 2025 00:44:43 +0000 (0:00:00.242) 0:00:50.589 ************ 2025-05-30 00:44:44.165614 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:44:44.165858 | orchestrator | 2025-05-30 00:44:44.167055 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:44:44.168151 | orchestrator | Friday 30 May 2025 00:44:44 +0000 (0:00:00.221) 0:00:50.811 ************ 2025-05-30 00:44:44.626373 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-05-30 00:44:44.630750 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-05-30 00:44:44.631500 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-05-30 00:44:44.632461 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-05-30 00:44:44.634243 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-05-30 00:44:44.635056 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-05-30 00:44:44.636223 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-05-30 00:44:44.637518 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-30 00:44:44.639283 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-05-30 00:44:44.640151 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-30 00:44:44.640570 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-30 00:44:44.641294 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-30 00:44:44.641652 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-30 00:44:44.642900 | orchestrator | 2025-05-30 00:44:44.642925 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:44:44.642939 | orchestrator | Friday 30 May 2025 00:44:44 +0000 (0:00:00.460) 0:00:51.272 ************ 2025-05-30 00:44:44.827651 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:44:44.828775 | orchestrator | 2025-05-30 00:44:44.829763 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:44:44.832725 | orchestrator | Friday 30 May 2025 00:44:44 +0000 (0:00:00.201) 0:00:51.474 ************ 2025-05-30 00:44:45.024727 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:44:45.025605 | orchestrator | 2025-05-30 00:44:45.026297 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:44:45.027064 | orchestrator | Friday 30 May 2025 00:44:45 +0000 (0:00:00.197) 0:00:51.671 ************ 2025-05-30 00:44:45.220640 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:44:45.220775 | orchestrator | 2025-05-30 00:44:45.221968 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:44:45.221992 | orchestrator | Friday 30 May 2025 00:44:45 +0000 (0:00:00.195) 0:00:51.867 ************ 2025-05-30 00:44:45.417207 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:44:45.417483 | orchestrator | 2025-05-30 00:44:45.418797 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:44:45.420000 | orchestrator | Friday 30 May 2025 00:44:45 +0000 (0:00:00.197) 0:00:52.064 ************ 2025-05-30 00:44:45.616457 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:44:45.617236 | orchestrator | 2025-05-30 00:44:45.617270 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:44:45.617605 | orchestrator | Friday 30 May 2025 00:44:45 +0000 (0:00:00.198) 0:00:52.263 ************ 2025-05-30 00:44:46.167312 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:44:46.167474 | orchestrator | 2025-05-30 00:44:46.169614 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:44:46.169651 | orchestrator | Friday 30 May 2025 00:44:46 +0000 (0:00:00.548) 0:00:52.811 ************ 2025-05-30 00:44:46.356956 | orchestrator | skipping: 
[testbed-node-5] 2025-05-30 00:44:46.357714 | orchestrator | 2025-05-30 00:44:46.357761 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:44:46.358174 | orchestrator | Friday 30 May 2025 00:44:46 +0000 (0:00:00.191) 0:00:53.003 ************ 2025-05-30 00:44:46.547422 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:44:46.547597 | orchestrator | 2025-05-30 00:44:46.548180 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:44:46.548764 | orchestrator | Friday 30 May 2025 00:44:46 +0000 (0:00:00.190) 0:00:53.194 ************ 2025-05-30 00:44:46.964720 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c29df819-5e55-4aea-aecd-e9fcfd91068f) 2025-05-30 00:44:46.965774 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c29df819-5e55-4aea-aecd-e9fcfd91068f) 2025-05-30 00:44:46.966198 | orchestrator | 2025-05-30 00:44:46.967196 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:44:46.967841 | orchestrator | Friday 30 May 2025 00:44:46 +0000 (0:00:00.415) 0:00:53.610 ************ 2025-05-30 00:44:47.397442 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2529d57e-ffb4-494c-a22f-a2bb1703f8b2) 2025-05-30 00:44:47.397533 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2529d57e-ffb4-494c-a22f-a2bb1703f8b2) 2025-05-30 00:44:47.398474 | orchestrator | 2025-05-30 00:44:47.399208 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:44:47.402103 | orchestrator | Friday 30 May 2025 00:44:47 +0000 (0:00:00.432) 0:00:54.042 ************ 2025-05-30 00:44:47.825749 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c7216231-2c47-48eb-b4a1-b98b10008028) 2025-05-30 00:44:47.825941 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c7216231-2c47-48eb-b4a1-b98b10008028) 2025-05-30 00:44:47.827211 | orchestrator | 2025-05-30 00:44:47.828065 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:44:47.828936 | orchestrator | Friday 30 May 2025 00:44:47 +0000 (0:00:00.430) 0:00:54.473 ************ 2025-05-30 00:44:48.263916 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8d1e0c18-9aac-4f03-b30e-87512c271b47) 2025-05-30 00:44:48.264641 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8d1e0c18-9aac-4f03-b30e-87512c271b47) 2025-05-30 00:44:48.265130 | orchestrator | 2025-05-30 00:44:48.267324 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-30 00:44:48.267364 | orchestrator | Friday 30 May 2025 00:44:48 +0000 (0:00:00.436) 0:00:54.909 ************ 2025-05-30 00:44:48.594532 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-30 00:44:48.596006 | orchestrator | 2025-05-30 00:44:48.596037 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:48.596295 | orchestrator | Friday 30 May 2025 00:44:48 +0000 (0:00:00.331) 0:00:55.241 ************ 2025-05-30 00:44:49.075152 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-30 00:44:49.075651 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 
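As with the link discovery, "Add known partitions to the list of available block devices" includes /ansible/tasks/_add-device-partitions.yml once per device; in this run only sda contributes partitions (sda1, sda14, sda15, sda16, as seen for testbed-node-4 above and again for testbed-node-5 below), while the other devices are skipped. The include file is not shown in the log; a rough sketch of one way to enumerate a device's partitions, reusing the assumed _device_links list from the earlier sketch:

  # Sketch: 'item' is the device name from the outer loop. lsblk prints the
  # device itself first, so the first output line is dropped.
  - name: List partitions of the device
    ansible.builtin.command: lsblk --noheadings --raw --output NAME /dev/{{ item }}
    register: _partitions
    changed_when: false

  - name: Add known partitions to the list of available block devices
    ansible.builtin.set_fact:
      _device_links: "{{ _device_links | default([]) + [partition] }}"
    loop: "{{ _partitions.stdout_lines[1:] }}"
    loop_control:
      loop_var: partition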
2025-05-30 00:44:49.077383 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-30 00:44:49.079273 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-30 00:44:49.079519 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-30 00:44:49.081118 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-30 00:44:49.082111 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-30 00:44:49.082818 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-30 00:44:49.083275 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-30 00:44:49.083740 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-30 00:44:49.084243 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-30 00:44:49.084770 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-30 00:44:49.085477 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-30 00:44:49.085790 | orchestrator | 2025-05-30 00:44:49.086156 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:49.086616 | orchestrator | Friday 30 May 2025 00:44:49 +0000 (0:00:00.478) 0:00:55.720 ************ 2025-05-30 00:44:49.637311 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:44:49.637483 | orchestrator | 2025-05-30 00:44:49.637947 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:49.638570 | orchestrator | Friday 30 May 2025 00:44:49 +0000 (0:00:00.563) 0:00:56.283 ************ 2025-05-30 00:44:49.868282 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:44:49.868445 | orchestrator | 2025-05-30 00:44:49.869410 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:49.870511 | orchestrator | Friday 30 May 2025 00:44:49 +0000 (0:00:00.231) 0:00:56.515 ************ 2025-05-30 00:44:50.064070 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:44:50.064292 | orchestrator | 2025-05-30 00:44:50.065238 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:50.066396 | orchestrator | Friday 30 May 2025 00:44:50 +0000 (0:00:00.194) 0:00:56.709 ************ 2025-05-30 00:44:50.250302 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:44:50.250466 | orchestrator | 2025-05-30 00:44:50.251471 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:50.252163 | orchestrator | Friday 30 May 2025 00:44:50 +0000 (0:00:00.187) 0:00:56.897 ************ 2025-05-30 00:44:50.442201 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:44:50.442362 | orchestrator | 2025-05-30 00:44:50.442863 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:50.443204 | orchestrator | Friday 30 May 2025 00:44:50 +0000 (0:00:00.192) 0:00:57.089 ************ 2025-05-30 00:44:50.635715 | orchestrator | 
skipping: [testbed-node-5] 2025-05-30 00:44:50.636600 | orchestrator | 2025-05-30 00:44:50.637597 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:50.637626 | orchestrator | Friday 30 May 2025 00:44:50 +0000 (0:00:00.192) 0:00:57.282 ************ 2025-05-30 00:44:50.829240 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:44:50.830846 | orchestrator | 2025-05-30 00:44:50.831561 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:50.832152 | orchestrator | Friday 30 May 2025 00:44:50 +0000 (0:00:00.193) 0:00:57.475 ************ 2025-05-30 00:44:51.040345 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:44:51.040731 | orchestrator | 2025-05-30 00:44:51.042597 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:51.042625 | orchestrator | Friday 30 May 2025 00:44:51 +0000 (0:00:00.207) 0:00:57.683 ************ 2025-05-30 00:44:51.860965 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-30 00:44:51.861302 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-05-30 00:44:51.862296 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-30 00:44:51.863488 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-30 00:44:51.864183 | orchestrator | 2025-05-30 00:44:51.864878 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:51.865920 | orchestrator | Friday 30 May 2025 00:44:51 +0000 (0:00:00.823) 0:00:58.507 ************ 2025-05-30 00:44:52.062664 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:44:52.064203 | orchestrator | 2025-05-30 00:44:52.065238 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:52.066201 | orchestrator | Friday 30 May 2025 00:44:52 +0000 (0:00:00.200) 0:00:58.707 ************ 2025-05-30 00:44:52.683547 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:44:52.683733 | orchestrator | 2025-05-30 00:44:52.683818 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:52.684792 | orchestrator | Friday 30 May 2025 00:44:52 +0000 (0:00:00.621) 0:00:59.329 ************ 2025-05-30 00:44:52.878176 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:44:52.878280 | orchestrator | 2025-05-30 00:44:52.878296 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-30 00:44:52.878309 | orchestrator | Friday 30 May 2025 00:44:52 +0000 (0:00:00.194) 0:00:59.523 ************ 2025-05-30 00:44:53.083677 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:44:53.083799 | orchestrator | 2025-05-30 00:44:53.084229 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-30 00:44:53.084980 | orchestrator | Friday 30 May 2025 00:44:53 +0000 (0:00:00.203) 0:00:59.727 ************ 2025-05-30 00:44:53.228019 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:44:53.228804 | orchestrator | 2025-05-30 00:44:53.229289 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-30 00:44:53.229791 | orchestrator | Friday 30 May 2025 00:44:53 +0000 (0:00:00.146) 0:00:59.873 ************ 2025-05-30 00:44:53.454790 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'2ff0e7ee-f669-5460-a216-2d1fc13a4a65'}}) 2025-05-30 00:44:53.455157 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dfef1ad9-1307-56b8-9770-fa52c7fc01ce'}}) 2025-05-30 00:44:53.456056 | orchestrator | 2025-05-30 00:44:53.456792 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-30 00:44:53.457906 | orchestrator | Friday 30 May 2025 00:44:53 +0000 (0:00:00.226) 0:01:00.100 ************ 2025-05-30 00:44:55.363378 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2ff0e7ee-f669-5460-a216-2d1fc13a4a65', 'data_vg': 'ceph-2ff0e7ee-f669-5460-a216-2d1fc13a4a65'}) 2025-05-30 00:44:55.363486 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-dfef1ad9-1307-56b8-9770-fa52c7fc01ce', 'data_vg': 'ceph-dfef1ad9-1307-56b8-9770-fa52c7fc01ce'}) 2025-05-30 00:44:55.363558 | orchestrator | 2025-05-30 00:44:55.364498 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-30 00:44:55.365192 | orchestrator | Friday 30 May 2025 00:44:55 +0000 (0:00:01.906) 0:01:02.007 ************ 2025-05-30 00:44:55.526384 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2ff0e7ee-f669-5460-a216-2d1fc13a4a65', 'data_vg': 'ceph-2ff0e7ee-f669-5460-a216-2d1fc13a4a65'})  2025-05-30 00:44:55.527075 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dfef1ad9-1307-56b8-9770-fa52c7fc01ce', 'data_vg': 'ceph-dfef1ad9-1307-56b8-9770-fa52c7fc01ce'})  2025-05-30 00:44:55.527941 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:44:55.529126 | orchestrator | 2025-05-30 00:44:55.529473 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-30 00:44:55.530259 | orchestrator | Friday 30 May 2025 00:44:55 +0000 (0:00:00.164) 0:01:02.171 ************ 2025-05-30 00:44:56.853492 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2ff0e7ee-f669-5460-a216-2d1fc13a4a65', 'data_vg': 'ceph-2ff0e7ee-f669-5460-a216-2d1fc13a4a65'}) 2025-05-30 00:44:56.853595 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-dfef1ad9-1307-56b8-9770-fa52c7fc01ce', 'data_vg': 'ceph-dfef1ad9-1307-56b8-9770-fa52c7fc01ce'}) 2025-05-30 00:44:56.855467 | orchestrator | 2025-05-30 00:44:56.855513 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-30 00:44:56.855527 | orchestrator | Friday 30 May 2025 00:44:56 +0000 (0:00:01.326) 0:01:03.497 ************ 2025-05-30 00:44:57.021947 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2ff0e7ee-f669-5460-a216-2d1fc13a4a65', 'data_vg': 'ceph-2ff0e7ee-f669-5460-a216-2d1fc13a4a65'})  2025-05-30 00:44:57.022186 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dfef1ad9-1307-56b8-9770-fa52c7fc01ce', 'data_vg': 'ceph-dfef1ad9-1307-56b8-9770-fa52c7fc01ce'})  2025-05-30 00:44:57.023110 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:44:57.024488 | orchestrator | 2025-05-30 00:44:57.025922 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-30 00:44:57.026890 | orchestrator | Friday 30 May 2025 00:44:57 +0000 (0:00:00.170) 0:01:03.668 ************ 2025-05-30 00:44:57.319347 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:44:57.319450 | orchestrator | 2025-05-30 00:44:57.319466 | orchestrator | TASK [Print 'Create DB VGs'] 
*************************************************** 2025-05-30 00:44:57.319609 | orchestrator | Friday 30 May 2025 00:44:57 +0000 (0:00:00.297) 0:01:03.966 ************ 2025-05-30 00:44:57.500706 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2ff0e7ee-f669-5460-a216-2d1fc13a4a65', 'data_vg': 'ceph-2ff0e7ee-f669-5460-a216-2d1fc13a4a65'})  2025-05-30 00:44:57.500841 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dfef1ad9-1307-56b8-9770-fa52c7fc01ce', 'data_vg': 'ceph-dfef1ad9-1307-56b8-9770-fa52c7fc01ce'})  2025-05-30 00:44:57.501466 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:44:57.502473 | orchestrator | 2025-05-30 00:44:57.505093 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-30 00:44:57.506058 | orchestrator | Friday 30 May 2025 00:44:57 +0000 (0:00:00.180) 0:01:04.146 ************ 2025-05-30 00:44:57.638582 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:44:57.638872 | orchestrator | 2025-05-30 00:44:57.638909 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-30 00:44:57.639341 | orchestrator | Friday 30 May 2025 00:44:57 +0000 (0:00:00.138) 0:01:04.285 ************ 2025-05-30 00:44:57.811846 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2ff0e7ee-f669-5460-a216-2d1fc13a4a65', 'data_vg': 'ceph-2ff0e7ee-f669-5460-a216-2d1fc13a4a65'})  2025-05-30 00:44:57.812027 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dfef1ad9-1307-56b8-9770-fa52c7fc01ce', 'data_vg': 'ceph-dfef1ad9-1307-56b8-9770-fa52c7fc01ce'})  2025-05-30 00:44:57.813723 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:44:57.814091 | orchestrator | 2025-05-30 00:44:57.816794 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-30 00:44:57.816846 | orchestrator | Friday 30 May 2025 00:44:57 +0000 (0:00:00.173) 0:01:04.458 ************ 2025-05-30 00:44:57.960999 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:44:57.961098 | orchestrator | 2025-05-30 00:44:57.961114 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-30 00:44:57.961673 | orchestrator | Friday 30 May 2025 00:44:57 +0000 (0:00:00.148) 0:01:04.607 ************ 2025-05-30 00:44:58.120275 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2ff0e7ee-f669-5460-a216-2d1fc13a4a65', 'data_vg': 'ceph-2ff0e7ee-f669-5460-a216-2d1fc13a4a65'})  2025-05-30 00:44:58.120967 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dfef1ad9-1307-56b8-9770-fa52c7fc01ce', 'data_vg': 'ceph-dfef1ad9-1307-56b8-9770-fa52c7fc01ce'})  2025-05-30 00:44:58.121827 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:44:58.122809 | orchestrator | 2025-05-30 00:44:58.125119 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-30 00:44:58.125161 | orchestrator | Friday 30 May 2025 00:44:58 +0000 (0:00:00.160) 0:01:04.767 ************ 2025-05-30 00:44:58.263759 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:44:58.264000 | orchestrator | 2025-05-30 00:44:58.266154 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-30 00:44:58.266247 | orchestrator | Friday 30 May 2025 00:44:58 +0000 (0:00:00.142) 0:01:04.910 ************ 2025-05-30 00:44:58.439460 | orchestrator | skipping: 
[testbed-node-5] => (item={'data': 'osd-block-2ff0e7ee-f669-5460-a216-2d1fc13a4a65', 'data_vg': 'ceph-2ff0e7ee-f669-5460-a216-2d1fc13a4a65'})  2025-05-30 00:44:58.439964 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dfef1ad9-1307-56b8-9770-fa52c7fc01ce', 'data_vg': 'ceph-dfef1ad9-1307-56b8-9770-fa52c7fc01ce'})  2025-05-30 00:44:58.441914 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:44:58.441946 | orchestrator | 2025-05-30 00:44:58.442318 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-30 00:44:58.443102 | orchestrator | Friday 30 May 2025 00:44:58 +0000 (0:00:00.174) 0:01:05.084 ************ 2025-05-30 00:44:58.600034 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2ff0e7ee-f669-5460-a216-2d1fc13a4a65', 'data_vg': 'ceph-2ff0e7ee-f669-5460-a216-2d1fc13a4a65'})  2025-05-30 00:44:58.600262 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dfef1ad9-1307-56b8-9770-fa52c7fc01ce', 'data_vg': 'ceph-dfef1ad9-1307-56b8-9770-fa52c7fc01ce'})  2025-05-30 00:44:58.600284 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:44:58.600850 | orchestrator | 2025-05-30 00:44:58.601195 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-30 00:44:58.601450 | orchestrator | Friday 30 May 2025 00:44:58 +0000 (0:00:00.162) 0:01:05.247 ************ 2025-05-30 00:44:58.761348 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2ff0e7ee-f669-5460-a216-2d1fc13a4a65', 'data_vg': 'ceph-2ff0e7ee-f669-5460-a216-2d1fc13a4a65'})  2025-05-30 00:44:58.762113 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dfef1ad9-1307-56b8-9770-fa52c7fc01ce', 'data_vg': 'ceph-dfef1ad9-1307-56b8-9770-fa52c7fc01ce'})  2025-05-30 00:44:58.763019 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:44:58.764107 | orchestrator | 2025-05-30 00:44:58.764870 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-30 00:44:58.765191 | orchestrator | Friday 30 May 2025 00:44:58 +0000 (0:00:00.161) 0:01:05.408 ************ 2025-05-30 00:44:58.899762 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:44:58.900232 | orchestrator | 2025-05-30 00:44:58.901348 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-30 00:44:58.902409 | orchestrator | Friday 30 May 2025 00:44:58 +0000 (0:00:00.136) 0:01:05.545 ************ 2025-05-30 00:44:59.035438 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:44:59.035891 | orchestrator | 2025-05-30 00:44:59.036133 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-30 00:44:59.036418 | orchestrator | Friday 30 May 2025 00:44:59 +0000 (0:00:00.135) 0:01:05.681 ************ 2025-05-30 00:44:59.355932 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:44:59.356067 | orchestrator | 2025-05-30 00:44:59.356549 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-30 00:44:59.356921 | orchestrator | Friday 30 May 2025 00:44:59 +0000 (0:00:00.320) 0:01:06.002 ************ 2025-05-30 00:44:59.512391 | orchestrator | ok: [testbed-node-5] => { 2025-05-30 00:44:59.514153 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-30 00:44:59.514181 | orchestrator | } 2025-05-30 00:44:59.514194 | orchestrator | 2025-05-30 00:44:59.514208 | orchestrator | 
TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-30 00:44:59.514996 | orchestrator | Friday 30 May 2025 00:44:59 +0000 (0:00:00.154) 0:01:06.156 ************ 2025-05-30 00:44:59.650951 | orchestrator | ok: [testbed-node-5] => { 2025-05-30 00:44:59.651141 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-30 00:44:59.652116 | orchestrator | } 2025-05-30 00:44:59.652715 | orchestrator | 2025-05-30 00:44:59.653433 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-30 00:44:59.654354 | orchestrator | Friday 30 May 2025 00:44:59 +0000 (0:00:00.141) 0:01:06.297 ************ 2025-05-30 00:44:59.794917 | orchestrator | ok: [testbed-node-5] => { 2025-05-30 00:44:59.795202 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-30 00:44:59.795845 | orchestrator | } 2025-05-30 00:44:59.798454 | orchestrator | 2025-05-30 00:44:59.798484 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-30 00:44:59.798498 | orchestrator | Friday 30 May 2025 00:44:59 +0000 (0:00:00.142) 0:01:06.440 ************ 2025-05-30 00:45:00.313518 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:45:00.313621 | orchestrator | 2025-05-30 00:45:00.314178 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-30 00:45:00.314584 | orchestrator | Friday 30 May 2025 00:45:00 +0000 (0:00:00.519) 0:01:06.960 ************ 2025-05-30 00:45:00.826703 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:45:00.826932 | orchestrator | 2025-05-30 00:45:00.827824 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-30 00:45:00.828802 | orchestrator | Friday 30 May 2025 00:45:00 +0000 (0:00:00.511) 0:01:07.472 ************ 2025-05-30 00:45:01.316819 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:45:01.317124 | orchestrator | 2025-05-30 00:45:01.318079 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-30 00:45:01.318285 | orchestrator | Friday 30 May 2025 00:45:01 +0000 (0:00:00.491) 0:01:07.963 ************ 2025-05-30 00:45:01.461621 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:45:01.461951 | orchestrator | 2025-05-30 00:45:01.463058 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-30 00:45:01.466672 | orchestrator | Friday 30 May 2025 00:45:01 +0000 (0:00:00.144) 0:01:08.108 ************ 2025-05-30 00:45:01.572255 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:45:01.572437 | orchestrator | 2025-05-30 00:45:01.572857 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-30 00:45:01.573425 | orchestrator | Friday 30 May 2025 00:45:01 +0000 (0:00:00.110) 0:01:08.219 ************ 2025-05-30 00:45:01.682277 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:45:01.682384 | orchestrator | 2025-05-30 00:45:01.683099 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-30 00:45:01.684074 | orchestrator | Friday 30 May 2025 00:45:01 +0000 (0:00:00.109) 0:01:08.328 ************ 2025-05-30 00:45:01.833409 | orchestrator | ok: [testbed-node-5] => { 2025-05-30 00:45:01.833591 | orchestrator |  "vgs_report": { 2025-05-30 00:45:01.834246 | orchestrator |  "vg": [] 2025-05-30 00:45:01.835570 | orchestrator |  } 2025-05-30 00:45:01.838001 | orchestrator 
| } 2025-05-30 00:45:01.838104 | orchestrator | 2025-05-30 00:45:01.838120 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-30 00:45:01.838141 | orchestrator | Friday 30 May 2025 00:45:01 +0000 (0:00:00.151) 0:01:08.480 ************ 2025-05-30 00:45:02.122436 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:45:02.123261 | orchestrator | 2025-05-30 00:45:02.123862 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-30 00:45:02.124495 | orchestrator | Friday 30 May 2025 00:45:02 +0000 (0:00:00.289) 0:01:08.769 ************ 2025-05-30 00:45:02.245777 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:45:02.245957 | orchestrator | 2025-05-30 00:45:02.246338 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-30 00:45:02.246676 | orchestrator | Friday 30 May 2025 00:45:02 +0000 (0:00:00.124) 0:01:08.893 ************ 2025-05-30 00:45:02.372490 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:45:02.372590 | orchestrator | 2025-05-30 00:45:02.373361 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-30 00:45:02.373812 | orchestrator | Friday 30 May 2025 00:45:02 +0000 (0:00:00.125) 0:01:09.018 ************ 2025-05-30 00:45:02.509474 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:45:02.510066 | orchestrator | 2025-05-30 00:45:02.510101 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-30 00:45:02.510353 | orchestrator | Friday 30 May 2025 00:45:02 +0000 (0:00:00.134) 0:01:09.153 ************ 2025-05-30 00:45:02.637255 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:45:02.637559 | orchestrator | 2025-05-30 00:45:02.638203 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-30 00:45:02.639926 | orchestrator | Friday 30 May 2025 00:45:02 +0000 (0:00:00.131) 0:01:09.284 ************ 2025-05-30 00:45:02.753724 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:45:02.754491 | orchestrator | 2025-05-30 00:45:02.757969 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-30 00:45:02.758001 | orchestrator | Friday 30 May 2025 00:45:02 +0000 (0:00:00.116) 0:01:09.401 ************ 2025-05-30 00:45:02.870153 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:45:02.870252 | orchestrator | 2025-05-30 00:45:02.870777 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-30 00:45:02.872276 | orchestrator | Friday 30 May 2025 00:45:02 +0000 (0:00:00.116) 0:01:09.518 ************ 2025-05-30 00:45:03.005332 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:45:03.005539 | orchestrator | 2025-05-30 00:45:03.007912 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-30 00:45:03.008388 | orchestrator | Friday 30 May 2025 00:45:02 +0000 (0:00:00.134) 0:01:09.652 ************ 2025-05-30 00:45:03.126934 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:45:03.127589 | orchestrator | 2025-05-30 00:45:03.129104 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-30 00:45:03.129207 | orchestrator | Friday 30 May 2025 00:45:03 +0000 (0:00:00.121) 0:01:09.773 ************ 2025-05-30 00:45:03.255427 | orchestrator | 
skipping: [testbed-node-5] 2025-05-30 00:45:03.255665 | orchestrator | 2025-05-30 00:45:03.256378 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-30 00:45:03.257547 | orchestrator | Friday 30 May 2025 00:45:03 +0000 (0:00:00.128) 0:01:09.902 ************ 2025-05-30 00:45:03.387373 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:45:03.388413 | orchestrator | 2025-05-30 00:45:03.388447 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-30 00:45:03.388895 | orchestrator | Friday 30 May 2025 00:45:03 +0000 (0:00:00.130) 0:01:10.033 ************ 2025-05-30 00:45:03.503089 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:45:03.504347 | orchestrator | 2025-05-30 00:45:03.504377 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-30 00:45:03.504727 | orchestrator | Friday 30 May 2025 00:45:03 +0000 (0:00:00.115) 0:01:10.149 ************ 2025-05-30 00:45:03.789736 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:45:03.789902 | orchestrator | 2025-05-30 00:45:03.790136 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-30 00:45:03.790645 | orchestrator | Friday 30 May 2025 00:45:03 +0000 (0:00:00.287) 0:01:10.437 ************ 2025-05-30 00:45:03.928430 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:45:03.928603 | orchestrator | 2025-05-30 00:45:03.928840 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-30 00:45:03.929532 | orchestrator | Friday 30 May 2025 00:45:03 +0000 (0:00:00.138) 0:01:10.575 ************ 2025-05-30 00:45:04.080918 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2ff0e7ee-f669-5460-a216-2d1fc13a4a65', 'data_vg': 'ceph-2ff0e7ee-f669-5460-a216-2d1fc13a4a65'})  2025-05-30 00:45:04.081093 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dfef1ad9-1307-56b8-9770-fa52c7fc01ce', 'data_vg': 'ceph-dfef1ad9-1307-56b8-9770-fa52c7fc01ce'})  2025-05-30 00:45:04.081217 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:45:04.081470 | orchestrator | 2025-05-30 00:45:04.082348 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-30 00:45:04.082813 | orchestrator | Friday 30 May 2025 00:45:04 +0000 (0:00:00.148) 0:01:10.724 ************ 2025-05-30 00:45:04.210681 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2ff0e7ee-f669-5460-a216-2d1fc13a4a65', 'data_vg': 'ceph-2ff0e7ee-f669-5460-a216-2d1fc13a4a65'})  2025-05-30 00:45:04.211658 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dfef1ad9-1307-56b8-9770-fa52c7fc01ce', 'data_vg': 'ceph-dfef1ad9-1307-56b8-9770-fa52c7fc01ce'})  2025-05-30 00:45:04.214860 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:45:04.215806 | orchestrator | 2025-05-30 00:45:04.216671 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-30 00:45:04.217135 | orchestrator | Friday 30 May 2025 00:45:04 +0000 (0:00:00.132) 0:01:10.857 ************ 2025-05-30 00:45:04.358155 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2ff0e7ee-f669-5460-a216-2d1fc13a4a65', 'data_vg': 'ceph-2ff0e7ee-f669-5460-a216-2d1fc13a4a65'})  2025-05-30 00:45:04.358551 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-dfef1ad9-1307-56b8-9770-fa52c7fc01ce', 'data_vg': 'ceph-dfef1ad9-1307-56b8-9770-fa52c7fc01ce'})  2025-05-30 00:45:04.358865 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:45:04.360099 | orchestrator | 2025-05-30 00:45:04.360630 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-30 00:45:04.361380 | orchestrator | Friday 30 May 2025 00:45:04 +0000 (0:00:00.148) 0:01:11.005 ************ 2025-05-30 00:45:04.514782 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2ff0e7ee-f669-5460-a216-2d1fc13a4a65', 'data_vg': 'ceph-2ff0e7ee-f669-5460-a216-2d1fc13a4a65'})  2025-05-30 00:45:04.514862 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dfef1ad9-1307-56b8-9770-fa52c7fc01ce', 'data_vg': 'ceph-dfef1ad9-1307-56b8-9770-fa52c7fc01ce'})  2025-05-30 00:45:04.515920 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:45:04.516249 | orchestrator | 2025-05-30 00:45:04.516864 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-30 00:45:04.516894 | orchestrator | Friday 30 May 2025 00:45:04 +0000 (0:00:00.156) 0:01:11.162 ************ 2025-05-30 00:45:04.673504 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2ff0e7ee-f669-5460-a216-2d1fc13a4a65', 'data_vg': 'ceph-2ff0e7ee-f669-5460-a216-2d1fc13a4a65'})  2025-05-30 00:45:04.674066 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dfef1ad9-1307-56b8-9770-fa52c7fc01ce', 'data_vg': 'ceph-dfef1ad9-1307-56b8-9770-fa52c7fc01ce'})  2025-05-30 00:45:04.675035 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:45:04.675196 | orchestrator | 2025-05-30 00:45:04.677453 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-30 00:45:04.678058 | orchestrator | Friday 30 May 2025 00:45:04 +0000 (0:00:00.158) 0:01:11.321 ************ 2025-05-30 00:45:04.821791 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2ff0e7ee-f669-5460-a216-2d1fc13a4a65', 'data_vg': 'ceph-2ff0e7ee-f669-5460-a216-2d1fc13a4a65'})  2025-05-30 00:45:04.822118 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dfef1ad9-1307-56b8-9770-fa52c7fc01ce', 'data_vg': 'ceph-dfef1ad9-1307-56b8-9770-fa52c7fc01ce'})  2025-05-30 00:45:04.822561 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:45:04.824477 | orchestrator | 2025-05-30 00:45:04.824558 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-30 00:45:04.826164 | orchestrator | Friday 30 May 2025 00:45:04 +0000 (0:00:00.146) 0:01:11.467 ************ 2025-05-30 00:45:04.990653 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2ff0e7ee-f669-5460-a216-2d1fc13a4a65', 'data_vg': 'ceph-2ff0e7ee-f669-5460-a216-2d1fc13a4a65'})  2025-05-30 00:45:04.991607 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dfef1ad9-1307-56b8-9770-fa52c7fc01ce', 'data_vg': 'ceph-dfef1ad9-1307-56b8-9770-fa52c7fc01ce'})  2025-05-30 00:45:04.992997 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:45:04.993871 | orchestrator | 2025-05-30 00:45:04.994255 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-30 00:45:04.994775 | orchestrator | Friday 30 May 2025 00:45:04 +0000 (0:00:00.169) 0:01:11.637 ************ 2025-05-30 00:45:05.161773 | orchestrator | skipping: [testbed-node-5] => 
(item={'data': 'osd-block-2ff0e7ee-f669-5460-a216-2d1fc13a4a65', 'data_vg': 'ceph-2ff0e7ee-f669-5460-a216-2d1fc13a4a65'})  2025-05-30 00:45:05.161938 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dfef1ad9-1307-56b8-9770-fa52c7fc01ce', 'data_vg': 'ceph-dfef1ad9-1307-56b8-9770-fa52c7fc01ce'})  2025-05-30 00:45:05.162796 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:45:05.162955 | orchestrator | 2025-05-30 00:45:05.163795 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-30 00:45:05.164161 | orchestrator | Friday 30 May 2025 00:45:05 +0000 (0:00:00.170) 0:01:11.808 ************ 2025-05-30 00:45:05.674259 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:45:05.674508 | orchestrator | 2025-05-30 00:45:05.675051 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-30 00:45:05.677781 | orchestrator | Friday 30 May 2025 00:45:05 +0000 (0:00:00.511) 0:01:12.319 ************ 2025-05-30 00:45:06.196961 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:45:06.197032 | orchestrator | 2025-05-30 00:45:06.197083 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-30 00:45:06.198165 | orchestrator | Friday 30 May 2025 00:45:06 +0000 (0:00:00.523) 0:01:12.843 ************ 2025-05-30 00:45:06.356817 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:45:06.357132 | orchestrator | 2025-05-30 00:45:06.358153 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-30 00:45:06.358887 | orchestrator | Friday 30 May 2025 00:45:06 +0000 (0:00:00.159) 0:01:13.003 ************ 2025-05-30 00:45:06.536662 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-2ff0e7ee-f669-5460-a216-2d1fc13a4a65', 'vg_name': 'ceph-2ff0e7ee-f669-5460-a216-2d1fc13a4a65'}) 2025-05-30 00:45:06.537100 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-dfef1ad9-1307-56b8-9770-fa52c7fc01ce', 'vg_name': 'ceph-dfef1ad9-1307-56b8-9770-fa52c7fc01ce'}) 2025-05-30 00:45:06.538610 | orchestrator | 2025-05-30 00:45:06.541319 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-30 00:45:06.541386 | orchestrator | Friday 30 May 2025 00:45:06 +0000 (0:00:00.180) 0:01:13.183 ************ 2025-05-30 00:45:06.751614 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2ff0e7ee-f669-5460-a216-2d1fc13a4a65', 'data_vg': 'ceph-2ff0e7ee-f669-5460-a216-2d1fc13a4a65'})  2025-05-30 00:45:06.752229 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dfef1ad9-1307-56b8-9770-fa52c7fc01ce', 'data_vg': 'ceph-dfef1ad9-1307-56b8-9770-fa52c7fc01ce'})  2025-05-30 00:45:06.753047 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:45:06.755810 | orchestrator | 2025-05-30 00:45:06.755835 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-30 00:45:06.755848 | orchestrator | Friday 30 May 2025 00:45:06 +0000 (0:00:00.214) 0:01:13.398 ************ 2025-05-30 00:45:06.916765 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2ff0e7ee-f669-5460-a216-2d1fc13a4a65', 'data_vg': 'ceph-2ff0e7ee-f669-5460-a216-2d1fc13a4a65'})  2025-05-30 00:45:06.916970 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dfef1ad9-1307-56b8-9770-fa52c7fc01ce', 'data_vg': 'ceph-dfef1ad9-1307-56b8-9770-fa52c7fc01ce'})  
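The "Create block VGs" / "Create block LVs" steps earlier in this play, and the "Get list of Ceph LVs with associated VGs" check just above, follow the usual ceph-volume naming convention: one VG named ceph-<uuid> per data disk and one LV named osd-block-<uuid> inside it. Below is a hedged sketch of that flow for a single disk; the playbook structure and registered variable name are assumptions for illustration, while the UUID and /dev/sdb are taken from the log output on this node.

---
# Sketch: create one ceph-volume style VG/LV pair and read it back as JSON.
- name: Prepare a block VG/LV for one OSD (sketch)
  hosts: testbed-node-5
  become: true
  vars:
    osd_lvm_uuid: 2ff0e7ee-f669-5460-a216-2d1fc13a4a65   # value from the log above
  tasks:
    - name: Create block VG on the data disk
      community.general.lvg:
        vg: "ceph-{{ osd_lvm_uuid }}"
        pvs: /dev/sdb

    - name: Create block LV spanning the whole VG
      community.general.lvol:
        vg: "ceph-{{ osd_lvm_uuid }}"
        lv: "osd-block-{{ osd_lvm_uuid }}"
        size: 100%VG

    - name: Get list of Ceph LVs with associated VGs
      ansible.builtin.command:
        cmd: lvs --reportformat json -o lv_name,vg_name
      register: _lvs_cmd_output
      changed_when: false

    - name: Show the LV -> VG mapping that was read back
      ansible.builtin.debug:
        msg: "{{ (_lvs_cmd_output.stdout | from_json).report }}"

The real play does the same for physical volumes and merges both results, as the "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" task above indicates; the merged result is what the "Print LVM report data" output just below shows.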
2025-05-30 00:45:06.917799 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:45:06.918569 | orchestrator | 2025-05-30 00:45:06.919013 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-30 00:45:06.920130 | orchestrator | Friday 30 May 2025 00:45:06 +0000 (0:00:00.165) 0:01:13.563 ************ 2025-05-30 00:45:07.075046 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2ff0e7ee-f669-5460-a216-2d1fc13a4a65', 'data_vg': 'ceph-2ff0e7ee-f669-5460-a216-2d1fc13a4a65'})  2025-05-30 00:45:07.075213 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dfef1ad9-1307-56b8-9770-fa52c7fc01ce', 'data_vg': 'ceph-dfef1ad9-1307-56b8-9770-fa52c7fc01ce'})  2025-05-30 00:45:07.076289 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:45:07.076919 | orchestrator | 2025-05-30 00:45:07.078971 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-30 00:45:07.078993 | orchestrator | Friday 30 May 2025 00:45:07 +0000 (0:00:00.157) 0:01:13.721 ************ 2025-05-30 00:45:07.476012 | orchestrator | ok: [testbed-node-5] => { 2025-05-30 00:45:07.478682 | orchestrator |  "lvm_report": { 2025-05-30 00:45:07.480108 | orchestrator |  "lv": [ 2025-05-30 00:45:07.480944 | orchestrator |  { 2025-05-30 00:45:07.481709 | orchestrator |  "lv_name": "osd-block-2ff0e7ee-f669-5460-a216-2d1fc13a4a65", 2025-05-30 00:45:07.482295 | orchestrator |  "vg_name": "ceph-2ff0e7ee-f669-5460-a216-2d1fc13a4a65" 2025-05-30 00:45:07.483239 | orchestrator |  }, 2025-05-30 00:45:07.483932 | orchestrator |  { 2025-05-30 00:45:07.484606 | orchestrator |  "lv_name": "osd-block-dfef1ad9-1307-56b8-9770-fa52c7fc01ce", 2025-05-30 00:45:07.485447 | orchestrator |  "vg_name": "ceph-dfef1ad9-1307-56b8-9770-fa52c7fc01ce" 2025-05-30 00:45:07.486368 | orchestrator |  } 2025-05-30 00:45:07.487228 | orchestrator |  ], 2025-05-30 00:45:07.487616 | orchestrator |  "pv": [ 2025-05-30 00:45:07.488547 | orchestrator |  { 2025-05-30 00:45:07.489550 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-30 00:45:07.489853 | orchestrator |  "vg_name": "ceph-2ff0e7ee-f669-5460-a216-2d1fc13a4a65" 2025-05-30 00:45:07.490761 | orchestrator |  }, 2025-05-30 00:45:07.491221 | orchestrator |  { 2025-05-30 00:45:07.492090 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-30 00:45:07.492591 | orchestrator |  "vg_name": "ceph-dfef1ad9-1307-56b8-9770-fa52c7fc01ce" 2025-05-30 00:45:07.493140 | orchestrator |  } 2025-05-30 00:45:07.493811 | orchestrator |  ] 2025-05-30 00:45:07.494458 | orchestrator |  } 2025-05-30 00:45:07.495194 | orchestrator | } 2025-05-30 00:45:07.495992 | orchestrator | 2025-05-30 00:45:07.496268 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:45:07.497144 | orchestrator | 2025-05-30 00:45:07 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-30 00:45:07.497168 | orchestrator | 2025-05-30 00:45:07 | INFO  | Please wait and do not abort execution. 
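The "Fail if … LV defined in lvm_volumes is missing" guards above compare the VG/LV names expected from lvm_volumes against what was just read back from LVM. A minimal sketch of such a guard follows; the play wrapper and the _vg_lv_names list are assumptions for illustration, with the values mirroring the lvm_report printed in the log.

---
# Sketch: verify that every block LV expected from lvm_volumes actually exists.
- name: Check lvm_volumes against the LVs found on the host (sketch)
  hosts: testbed-node-5
  gather_facts: false
  vars:
    # Mirrors the lvm_report above; in the real play this list is built from
    # the combined lvs/pvs JSON.
    _vg_lv_names:
      - ceph-2ff0e7ee-f669-5460-a216-2d1fc13a4a65/osd-block-2ff0e7ee-f669-5460-a216-2d1fc13a4a65
      - ceph-dfef1ad9-1307-56b8-9770-fa52c7fc01ce/osd-block-dfef1ad9-1307-56b8-9770-fa52c7fc01ce
    lvm_volumes:
      - data: osd-block-2ff0e7ee-f669-5460-a216-2d1fc13a4a65
        data_vg: ceph-2ff0e7ee-f669-5460-a216-2d1fc13a4a65
      - data: osd-block-dfef1ad9-1307-56b8-9770-fa52c7fc01ce
        data_vg: ceph-dfef1ad9-1307-56b8-9770-fa52c7fc01ce
  tasks:
    - name: Fail if a block LV defined in lvm_volumes is missing
      ansible.builtin.assert:
        that: (item.data_vg ~ '/' ~ item.data) in _vg_lv_names
        fail_msg: "Block LV {{ item.data }} in VG {{ item.data_vg }} was not found"
      loop: "{{ lvm_volumes }}"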
2025-05-30 00:45:07.497425 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-30 00:45:07.498127 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-30 00:45:07.498351 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-30 00:45:07.498916 | orchestrator | 2025-05-30 00:45:07.499245 | orchestrator | 2025-05-30 00:45:07.499542 | orchestrator | 2025-05-30 00:45:07.500150 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-30 00:45:07.500447 | orchestrator | Friday 30 May 2025 00:45:07 +0000 (0:00:00.399) 0:01:14.120 ************ 2025-05-30 00:45:07.500833 | orchestrator | =============================================================================== 2025-05-30 00:45:07.501231 | orchestrator | Create block VGs -------------------------------------------------------- 5.92s 2025-05-30 00:45:07.501619 | orchestrator | Create block LVs -------------------------------------------------------- 4.09s 2025-05-30 00:45:07.502068 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 2.12s 2025-05-30 00:45:07.502421 | orchestrator | Print LVM report data --------------------------------------------------- 1.88s 2025-05-30 00:45:07.502864 | orchestrator | Add known links to the list of available block devices ------------------ 1.61s 2025-05-30 00:45:07.503019 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.56s 2025-05-30 00:45:07.503410 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.54s 2025-05-30 00:45:07.503899 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.53s 2025-05-30 00:45:07.504194 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.51s 2025-05-30 00:45:07.504626 | orchestrator | Add known partitions to the list of available block devices ------------- 1.40s 2025-05-30 00:45:07.505009 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.05s 2025-05-30 00:45:07.505492 | orchestrator | Add known partitions to the list of available block devices ------------- 0.82s 2025-05-30 00:45:07.505803 | orchestrator | Add known links to the list of available block devices ------------------ 0.80s 2025-05-30 00:45:07.506173 | orchestrator | Create list of VG/LV names ---------------------------------------------- 0.79s 2025-05-30 00:45:07.506466 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.73s 2025-05-30 00:45:07.506914 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s 2025-05-30 00:45:07.507345 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.69s 2025-05-30 00:45:07.507624 | orchestrator | Get initial list of available block devices ----------------------------- 0.66s 2025-05-30 00:45:07.508089 | orchestrator | Create dict of block VGs -> PVs from ceph_osd_devices ------------------- 0.66s 2025-05-30 00:45:07.508546 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s 2025-05-30 00:45:09.486962 | orchestrator | 2025-05-30 00:45:09 | INFO  | Task ebb85b2f-4adc-486f-b21d-914f42cb84a2 (facts) was prepared for execution. 
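The task queued here ("facts") produces the "Apply role facts" play whose output follows. Functionally it boils down to ensuring a custom facts directory exists on every host and then re-gathering facts. A minimal sketch of that pattern is given below; the path /etc/ansible/facts.d is the Ansible default location for local facts and an assumption here, not taken from the osism.commons.facts role itself.

---
# Sketch of the facts refresh: custom facts directory plus a full fact gather.
- name: Apply role facts (sketch)
  hosts: all
  become: true
  tasks:
    - name: Create custom facts directory
      ansible.builtin.file:
        path: /etc/ansible/facts.d
        state: directory
        owner: root
        group: root
        mode: "0755"

    - name: Gather facts about hosts
      ansible.builtin.setup:

In the run below, the "Copy fact files" task is skipped on every host, so only the directory creation and the fact gathering actually take effect.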
2025-05-30 00:45:09.487055 | orchestrator | 2025-05-30 00:45:09 | INFO  | It takes a moment until task ebb85b2f-4adc-486f-b21d-914f42cb84a2 (facts) has been started and output is visible here. 2025-05-30 00:45:12.594855 | orchestrator | 2025-05-30 00:45:12.594976 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-05-30 00:45:12.594998 | orchestrator | 2025-05-30 00:45:12.595013 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-30 00:45:12.595021 | orchestrator | Friday 30 May 2025 00:45:12 +0000 (0:00:00.203) 0:00:00.203 ************ 2025-05-30 00:45:13.580979 | orchestrator | ok: [testbed-manager] 2025-05-30 00:45:13.581146 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:45:13.585242 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:45:13.585279 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:45:13.585291 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:45:13.585302 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:45:13.585313 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:45:13.586772 | orchestrator | 2025-05-30 00:45:13.587399 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-30 00:45:13.587682 | orchestrator | Friday 30 May 2025 00:45:13 +0000 (0:00:00.988) 0:00:01.192 ************ 2025-05-30 00:45:13.741854 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:45:13.818547 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:45:13.895785 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:45:13.974264 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:45:14.047911 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:45:14.749488 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:45:14.749651 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:45:14.750184 | orchestrator | 2025-05-30 00:45:14.750213 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-30 00:45:14.751993 | orchestrator | 2025-05-30 00:45:14.752239 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-30 00:45:14.752924 | orchestrator | Friday 30 May 2025 00:45:14 +0000 (0:00:01.170) 0:00:02.362 ************ 2025-05-30 00:45:20.253395 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:45:20.253572 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:45:20.254100 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:45:20.258265 | orchestrator | ok: [testbed-manager] 2025-05-30 00:45:20.258369 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:45:20.258384 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:45:20.258396 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:45:20.258408 | orchestrator | 2025-05-30 00:45:20.258420 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-30 00:45:20.258444 | orchestrator | 2025-05-30 00:45:20.258851 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-30 00:45:20.259277 | orchestrator | Friday 30 May 2025 00:45:20 +0000 (0:00:05.504) 0:00:07.867 ************ 2025-05-30 00:45:20.598426 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:45:20.672391 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:45:20.742847 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:45:20.818199 | orchestrator | skipping: [testbed-node-2] 2025-05-30 
00:45:20.891260 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:45:20.931576 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:45:20.931777 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:45:20.932939 | orchestrator | 2025-05-30 00:45:20.934012 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:45:20.934145 | orchestrator | 2025-05-30 00:45:20 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-30 00:45:20.934922 | orchestrator | 2025-05-30 00:45:20 | INFO  | Please wait and do not abort execution. 2025-05-30 00:45:20.935840 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:45:20.936859 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:45:20.937856 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:45:20.938944 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:45:20.939059 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:45:20.940024 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:45:20.940321 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:45:20.941009 | orchestrator | 2025-05-30 00:45:20.942166 | orchestrator | Friday 30 May 2025 00:45:20 +0000 (0:00:00.676) 0:00:08.544 ************ 2025-05-30 00:45:20.942592 | orchestrator | =============================================================================== 2025-05-30 00:45:20.944214 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.50s 2025-05-30 00:45:20.944603 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.17s 2025-05-30 00:45:20.945491 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.99s 2025-05-30 00:45:20.946093 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.68s 2025-05-30 00:45:21.453667 | orchestrator | 2025-05-30 00:45:21.457646 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Fri May 30 00:45:21 UTC 2025 2025-05-30 00:45:21.457752 | orchestrator | 2025-05-30 00:45:22.859385 | orchestrator | 2025-05-30 00:45:22 | INFO  | Collection nutshell is prepared for execution 2025-05-30 00:45:22.859452 | orchestrator | 2025-05-30 00:45:22 | INFO  | D [0] - dotfiles 2025-05-30 00:45:22.863961 | orchestrator | 2025-05-30 00:45:22 | INFO  | D [0] - homer 2025-05-30 00:45:22.863979 | orchestrator | 2025-05-30 00:45:22 | INFO  | D [0] - netdata 2025-05-30 00:45:22.863987 | orchestrator | 2025-05-30 00:45:22 | INFO  | D [0] - openstackclient 2025-05-30 00:45:22.863995 | orchestrator | 2025-05-30 00:45:22 | INFO  | D [0] - phpmyadmin 2025-05-30 00:45:22.864002 | orchestrator | 2025-05-30 00:45:22 | INFO  | A [0] - common 2025-05-30 00:45:22.865200 | orchestrator | 2025-05-30 00:45:22 | INFO  | A [1] -- loadbalancer 2025-05-30 00:45:22.865214 | orchestrator | 2025-05-30 00:45:22 | INFO  | D [2] --- opensearch 2025-05-30 00:45:22.865221 | orchestrator | 2025-05-30 00:45:22 | INFO  | A [2] --- mariadb-ng 2025-05-30 00:45:22.865228 | orchestrator | 2025-05-30 
00:45:22 | INFO  | D [3] ---- horizon 2025-05-30 00:45:22.865235 | orchestrator | 2025-05-30 00:45:22 | INFO  | A [3] ---- keystone 2025-05-30 00:45:22.865334 | orchestrator | 2025-05-30 00:45:22 | INFO  | A [4] ----- neutron 2025-05-30 00:45:22.865347 | orchestrator | 2025-05-30 00:45:22 | INFO  | D [5] ------ wait-for-nova 2025-05-30 00:45:22.865355 | orchestrator | 2025-05-30 00:45:22 | INFO  | A [5] ------ octavia 2025-05-30 00:45:22.865650 | orchestrator | 2025-05-30 00:45:22 | INFO  | D [4] ----- barbican 2025-05-30 00:45:22.865663 | orchestrator | 2025-05-30 00:45:22 | INFO  | D [4] ----- designate 2025-05-30 00:45:22.865791 | orchestrator | 2025-05-30 00:45:22 | INFO  | D [4] ----- ironic 2025-05-30 00:45:22.865804 | orchestrator | 2025-05-30 00:45:22 | INFO  | D [4] ----- placement 2025-05-30 00:45:22.865811 | orchestrator | 2025-05-30 00:45:22 | INFO  | D [4] ----- magnum 2025-05-30 00:45:22.868260 | orchestrator | 2025-05-30 00:45:22 | INFO  | A [1] -- openvswitch 2025-05-30 00:45:22.868280 | orchestrator | 2025-05-30 00:45:22 | INFO  | D [2] --- ovn 2025-05-30 00:45:22.868288 | orchestrator | 2025-05-30 00:45:22 | INFO  | D [1] -- memcached 2025-05-30 00:45:22.868297 | orchestrator | 2025-05-30 00:45:22 | INFO  | D [1] -- redis 2025-05-30 00:45:22.868306 | orchestrator | 2025-05-30 00:45:22 | INFO  | D [1] -- rabbitmq-ng 2025-05-30 00:45:22.868314 | orchestrator | 2025-05-30 00:45:22 | INFO  | A [0] - kubernetes 2025-05-30 00:45:22.868323 | orchestrator | 2025-05-30 00:45:22 | INFO  | D [1] -- kubeconfig 2025-05-30 00:45:22.868331 | orchestrator | 2025-05-30 00:45:22 | INFO  | A [1] -- copy-kubeconfig 2025-05-30 00:45:22.868340 | orchestrator | 2025-05-30 00:45:22 | INFO  | A [0] - ceph 2025-05-30 00:45:22.868349 | orchestrator | 2025-05-30 00:45:22 | INFO  | A [1] -- ceph-pools 2025-05-30 00:45:22.868357 | orchestrator | 2025-05-30 00:45:22 | INFO  | A [2] --- copy-ceph-keys 2025-05-30 00:45:22.868366 | orchestrator | 2025-05-30 00:45:22 | INFO  | A [3] ---- cephclient 2025-05-30 00:45:22.868375 | orchestrator | 2025-05-30 00:45:22 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-05-30 00:45:22.868384 | orchestrator | 2025-05-30 00:45:22 | INFO  | A [4] ----- wait-for-keystone 2025-05-30 00:45:22.868392 | orchestrator | 2025-05-30 00:45:22 | INFO  | D [5] ------ kolla-ceph-rgw 2025-05-30 00:45:22.868417 | orchestrator | 2025-05-30 00:45:22 | INFO  | D [5] ------ glance 2025-05-30 00:45:22.868426 | orchestrator | 2025-05-30 00:45:22 | INFO  | D [5] ------ cinder 2025-05-30 00:45:22.868435 | orchestrator | 2025-05-30 00:45:22 | INFO  | D [5] ------ nova 2025-05-30 00:45:22.868444 | orchestrator | 2025-05-30 00:45:22 | INFO  | A [4] ----- prometheus 2025-05-30 00:45:22.868452 | orchestrator | 2025-05-30 00:45:22 | INFO  | D [5] ------ grafana 2025-05-30 00:45:23.008551 | orchestrator | 2025-05-30 00:45:23 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-05-30 00:45:23.008626 | orchestrator | 2025-05-30 00:45:23 | INFO  | Tasks are running in the background 2025-05-30 00:45:24.887184 | orchestrator | 2025-05-30 00:45:24 | INFO  | No task IDs specified, wait for all currently running tasks 2025-05-30 00:45:26.995913 | orchestrator | 2025-05-30 00:45:26 | INFO  | Task 9aa03750-5055-476a-a1d8-a6708f3720fe is in state STARTED 2025-05-30 00:45:26.996340 | orchestrator | 2025-05-30 00:45:26 | INFO  | Task 82d26080-429f-4a56-8a24-5e2143966421 is in state STARTED 2025-05-30 00:45:26.996461 | orchestrator | 2025-05-30 00:45:26 | INFO  | Task 
3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:45:26.997065 | orchestrator | 2025-05-30 00:45:26 | INFO  | Task 1f2ae5ba-61be-4e64-b5e4-4ec5ef197625 is in state STARTED 2025-05-30 00:45:26.997594 | orchestrator | 2025-05-30 00:45:26 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:45:26.998170 | orchestrator | 2025-05-30 00:45:26 | INFO  | Task 021f87c6-a262-4670-adf0-6363d1a8d42b is in state STARTED 2025-05-30 00:45:26.998180 | orchestrator | 2025-05-30 00:45:26 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:45:30.033862 | orchestrator | 2025-05-30 00:45:30 | INFO  | Task 9aa03750-5055-476a-a1d8-a6708f3720fe is in state STARTED 2025-05-30 00:45:30.034096 | orchestrator | 2025-05-30 00:45:30 | INFO  | Task 82d26080-429f-4a56-8a24-5e2143966421 is in state STARTED 2025-05-30 00:45:30.034463 | orchestrator | 2025-05-30 00:45:30 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:45:30.034808 | orchestrator | 2025-05-30 00:45:30 | INFO  | Task 1f2ae5ba-61be-4e64-b5e4-4ec5ef197625 is in state STARTED 2025-05-30 00:45:30.035320 | orchestrator | 2025-05-30 00:45:30 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:45:30.035720 | orchestrator | 2025-05-30 00:45:30 | INFO  | Task 021f87c6-a262-4670-adf0-6363d1a8d42b is in state STARTED 2025-05-30 00:45:30.035742 | orchestrator | 2025-05-30 00:45:30 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:45:33.077574 | orchestrator | 2025-05-30 00:45:33 | INFO  | Task 9aa03750-5055-476a-a1d8-a6708f3720fe is in state STARTED 2025-05-30 00:45:33.077690 | orchestrator | 2025-05-30 00:45:33 | INFO  | Task 82d26080-429f-4a56-8a24-5e2143966421 is in state STARTED 2025-05-30 00:45:33.077878 | orchestrator | 2025-05-30 00:45:33 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:45:33.077897 | orchestrator | 2025-05-30 00:45:33 | INFO  | Task 1f2ae5ba-61be-4e64-b5e4-4ec5ef197625 is in state STARTED 2025-05-30 00:45:33.078483 | orchestrator | 2025-05-30 00:45:33 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:45:33.079610 | orchestrator | 2025-05-30 00:45:33 | INFO  | Task 021f87c6-a262-4670-adf0-6363d1a8d42b is in state STARTED 2025-05-30 00:45:33.079746 | orchestrator | 2025-05-30 00:45:33 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:45:36.131776 | orchestrator | 2025-05-30 00:45:36 | INFO  | Task 9aa03750-5055-476a-a1d8-a6708f3720fe is in state STARTED 2025-05-30 00:45:36.132804 | orchestrator | 2025-05-30 00:45:36 | INFO  | Task 82d26080-429f-4a56-8a24-5e2143966421 is in state STARTED 2025-05-30 00:45:36.132999 | orchestrator | 2025-05-30 00:45:36 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:45:36.136916 | orchestrator | 2025-05-30 00:45:36 | INFO  | Task 1f2ae5ba-61be-4e64-b5e4-4ec5ef197625 is in state STARTED 2025-05-30 00:45:36.143034 | orchestrator | 2025-05-30 00:45:36 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:45:36.143136 | orchestrator | 2025-05-30 00:45:36 | INFO  | Task 021f87c6-a262-4670-adf0-6363d1a8d42b is in state STARTED 2025-05-30 00:45:36.143150 | orchestrator | 2025-05-30 00:45:36 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:45:39.199927 | orchestrator | 2025-05-30 00:45:39 | INFO  | Task 9aa03750-5055-476a-a1d8-a6708f3720fe is in state STARTED 2025-05-30 00:45:39.201037 | orchestrator | 2025-05-30 
00:45:39 | INFO  | Task 82d26080-429f-4a56-8a24-5e2143966421 is in state STARTED 2025-05-30 00:45:39.203169 | orchestrator | 2025-05-30 00:45:39 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:45:39.203875 | orchestrator | 2025-05-30 00:45:39 | INFO  | Task 1f2ae5ba-61be-4e64-b5e4-4ec5ef197625 is in state STARTED 2025-05-30 00:45:39.205300 | orchestrator | 2025-05-30 00:45:39 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:45:39.205605 | orchestrator | 2025-05-30 00:45:39 | INFO  | Task 021f87c6-a262-4670-adf0-6363d1a8d42b is in state STARTED 2025-05-30 00:45:39.205626 | orchestrator | 2025-05-30 00:45:39 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:45:42.261610 | orchestrator | 2025-05-30 00:45:42 | INFO  | Task e5c81b6d-a4eb-4404-8e83-20a9868ee367 is in state STARTED 2025-05-30 00:45:42.261760 | orchestrator | 2025-05-30 00:45:42 | INFO  | Task 9aa03750-5055-476a-a1d8-a6708f3720fe is in state STARTED 2025-05-30 00:45:42.261776 | orchestrator | 2025-05-30 00:45:42 | INFO  | Task 82d26080-429f-4a56-8a24-5e2143966421 is in state STARTED 2025-05-30 00:45:42.261788 | orchestrator | 2025-05-30 00:45:42 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:45:42.265960 | orchestrator | 2025-05-30 00:45:42 | INFO  | Task 1f2ae5ba-61be-4e64-b5e4-4ec5ef197625 is in state STARTED 2025-05-30 00:45:42.265984 | orchestrator | 2025-05-30 00:45:42 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:45:42.265995 | orchestrator | 2025-05-30 00:45:42 | INFO  | Task 021f87c6-a262-4670-adf0-6363d1a8d42b is in state SUCCESS 2025-05-30 00:45:42.266007 | orchestrator | 2025-05-30 00:45:42 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:45:42.266780 | orchestrator | 2025-05-30 00:45:42.266813 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-05-30 00:45:42.266826 | orchestrator | 2025-05-30 00:45:42.266839 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-05-30 00:45:42.266853 | orchestrator | Friday 30 May 2025 00:45:30 +0000 (0:00:00.206) 0:00:00.206 ************ 2025-05-30 00:45:42.266866 | orchestrator | changed: [testbed-manager] 2025-05-30 00:45:42.266880 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:45:42.266893 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:45:42.266905 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:45:42.266916 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:45:42.266928 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:45:42.266941 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:45:42.266953 | orchestrator | 2025-05-30 00:45:42.266964 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2025-05-30 00:45:42.266993 | orchestrator | Friday 30 May 2025 00:45:33 +0000 (0:00:03.377) 0:00:03.583 ************ 2025-05-30 00:45:42.267006 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-05-30 00:45:42.267016 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-05-30 00:45:42.267027 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-05-30 00:45:42.267037 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-05-30 00:45:42.267048 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-05-30 00:45:42.267059 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-05-30 00:45:42.267069 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-05-30 00:45:42.267080 | orchestrator | 2025-05-30 00:45:42.267090 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-05-30 00:45:42.267101 | orchestrator | Friday 30 May 2025 00:45:35 +0000 (0:00:01.855) 0:00:05.439 ************ 2025-05-30 00:45:42.267117 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-30 00:45:34.602538', 'end': '2025-05-30 00:45:34.606273', 'delta': '0:00:00.003735', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-30 00:45:42.267143 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-30 00:45:34.648937', 'end': '2025-05-30 00:45:34.656996', 'delta': '0:00:00.008059', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-30 00:45:42.267156 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-30 00:45:34.879669', 'end': '2025-05-30 00:45:34.887881', 'delta': '0:00:00.008212', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 
'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-30 00:45:42.267189 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-30 00:45:34.728653', 'end': '2025-05-30 00:45:34.737819', 'delta': '0:00:00.009166', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-30 00:45:42.267208 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-30 00:45:35.035647', 'end': '2025-05-30 00:45:35.043723', 'delta': '0:00:00.008076', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-30 00:45:42.267220 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-30 00:45:35.302145', 'end': '2025-05-30 00:45:35.312035', 'delta': '0:00:00.009890', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-30 00:45:42.267235 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-30 00:45:35.415110', 'end': '2025-05-30 00:45:35.425847', 'delta': '0:00:00.010737', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-30 00:45:42.267247 | orchestrator | 2025-05-30 00:45:42.267258 
| orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-05-30 00:45:42.267269 | orchestrator | Friday 30 May 2025 00:45:37 +0000 (0:00:01.775) 0:00:07.214 ************ 2025-05-30 00:45:42.267280 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-05-30 00:45:42.267291 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-05-30 00:45:42.267302 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-05-30 00:45:42.267313 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-05-30 00:45:42.267323 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-05-30 00:45:42.267334 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-05-30 00:45:42.267345 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-05-30 00:45:42.267356 | orchestrator | 2025-05-30 00:45:42.267366 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:45:42.267377 | orchestrator | testbed-manager : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:45:42.267397 | orchestrator | testbed-node-0 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:45:42.267408 | orchestrator | testbed-node-1 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:45:42.267425 | orchestrator | testbed-node-2 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:45:42.267437 | orchestrator | testbed-node-3 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:45:42.267448 | orchestrator | testbed-node-4 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:45:42.267458 | orchestrator | testbed-node-5 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:45:42.267469 | orchestrator | 2025-05-30 00:45:42.267480 | orchestrator | Friday 30 May 2025 00:45:40 +0000 (0:00:02.696) 0:00:09.910 ************ 2025-05-30 00:45:42.267491 | orchestrator | =============================================================================== 2025-05-30 00:45:42.267502 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.38s 2025-05-30 00:45:42.267513 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.70s 2025-05-30 00:45:42.267524 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.86s 2025-05-30 00:45:42.267535 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. 
--- 1.78s 2025-05-30 00:45:45.319614 | orchestrator | 2025-05-30 00:45:45 | INFO  | Task e5c81b6d-a4eb-4404-8e83-20a9868ee367 is in state STARTED 2025-05-30 00:45:45.319755 | orchestrator | 2025-05-30 00:45:45 | INFO  | Task 9aa03750-5055-476a-a1d8-a6708f3720fe is in state STARTED 2025-05-30 00:45:45.319848 | orchestrator | 2025-05-30 00:45:45 | INFO  | Task 82d26080-429f-4a56-8a24-5e2143966421 is in state STARTED 2025-05-30 00:45:45.323179 | orchestrator | 2025-05-30 00:45:45 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:45:45.328622 | orchestrator | 2025-05-30 00:45:45 | INFO  | Task 1f2ae5ba-61be-4e64-b5e4-4ec5ef197625 is in state STARTED 2025-05-30 00:45:45.330168 | orchestrator | 2025-05-30 00:45:45 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:45:45.330211 | orchestrator | 2025-05-30 00:45:45 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:45:48.384407 | orchestrator | 2025-05-30 00:45:48 | INFO  | Task e5c81b6d-a4eb-4404-8e83-20a9868ee367 is in state STARTED 2025-05-30 00:45:48.387086 | orchestrator | 2025-05-30 00:45:48 | INFO  | Task 9aa03750-5055-476a-a1d8-a6708f3720fe is in state STARTED 2025-05-30 00:45:48.388592 | orchestrator | 2025-05-30 00:45:48 | INFO  | Task 82d26080-429f-4a56-8a24-5e2143966421 is in state STARTED 2025-05-30 00:45:48.390963 | orchestrator | 2025-05-30 00:45:48 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:45:48.393628 | orchestrator | 2025-05-30 00:45:48 | INFO  | Task 1f2ae5ba-61be-4e64-b5e4-4ec5ef197625 is in state STARTED 2025-05-30 00:45:48.397338 | orchestrator | 2025-05-30 00:45:48 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:45:48.397366 | orchestrator | 2025-05-30 00:45:48 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:45:51.453767 | orchestrator | 2025-05-30 00:45:51 | INFO  | Task e5c81b6d-a4eb-4404-8e83-20a9868ee367 is in state STARTED 2025-05-30 00:45:51.458609 | orchestrator | 2025-05-30 00:45:51 | INFO  | Task 9aa03750-5055-476a-a1d8-a6708f3720fe is in state STARTED 2025-05-30 00:45:51.464060 | orchestrator | 2025-05-30 00:45:51 | INFO  | Task 82d26080-429f-4a56-8a24-5e2143966421 is in state STARTED 2025-05-30 00:45:51.466450 | orchestrator | 2025-05-30 00:45:51 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:45:51.469795 | orchestrator | 2025-05-30 00:45:51 | INFO  | Task 1f2ae5ba-61be-4e64-b5e4-4ec5ef197625 is in state STARTED 2025-05-30 00:45:51.473097 | orchestrator | 2025-05-30 00:45:51 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:45:51.474122 | orchestrator | 2025-05-30 00:45:51 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:45:54.534887 | orchestrator | 2025-05-30 00:45:54 | INFO  | Task e5c81b6d-a4eb-4404-8e83-20a9868ee367 is in state STARTED 2025-05-30 00:45:54.538003 | orchestrator | 2025-05-30 00:45:54 | INFO  | Task 9aa03750-5055-476a-a1d8-a6708f3720fe is in state STARTED 2025-05-30 00:45:54.539460 | orchestrator | 2025-05-30 00:45:54 | INFO  | Task 82d26080-429f-4a56-8a24-5e2143966421 is in state STARTED 2025-05-30 00:45:54.540612 | orchestrator | 2025-05-30 00:45:54 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:45:54.542633 | orchestrator | 2025-05-30 00:45:54 | INFO  | Task 1f2ae5ba-61be-4e64-b5e4-4ec5ef197625 is in state STARTED 2025-05-30 00:45:54.544822 | orchestrator | 2025-05-30 00:45:54 | INFO  | Task 
09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:45:54.544856 | orchestrator | 2025-05-30 00:45:54 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:45:57.604020 | orchestrator | 2025-05-30 00:45:57 | INFO  | Task e5c81b6d-a4eb-4404-8e83-20a9868ee367 is in state STARTED 2025-05-30 00:45:57.604922 | orchestrator | 2025-05-30 00:45:57 | INFO  | Task 9aa03750-5055-476a-a1d8-a6708f3720fe is in state STARTED 2025-05-30 00:45:57.607099 | orchestrator | 2025-05-30 00:45:57 | INFO  | Task 82d26080-429f-4a56-8a24-5e2143966421 is in state STARTED 2025-05-30 00:45:57.609365 | orchestrator | 2025-05-30 00:45:57 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:45:57.611802 | orchestrator | 2025-05-30 00:45:57 | INFO  | Task 1f2ae5ba-61be-4e64-b5e4-4ec5ef197625 is in state STARTED 2025-05-30 00:45:57.613631 | orchestrator | 2025-05-30 00:45:57 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:45:57.613678 | orchestrator | 2025-05-30 00:45:57 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:46:00.667168 | orchestrator | 2025-05-30 00:46:00 | INFO  | Task e5c81b6d-a4eb-4404-8e83-20a9868ee367 is in state STARTED 2025-05-30 00:46:00.669208 | orchestrator | 2025-05-30 00:46:00 | INFO  | Task 9aa03750-5055-476a-a1d8-a6708f3720fe is in state STARTED 2025-05-30 00:46:00.671539 | orchestrator | 2025-05-30 00:46:00 | INFO  | Task 82d26080-429f-4a56-8a24-5e2143966421 is in state STARTED 2025-05-30 00:46:00.673925 | orchestrator | 2025-05-30 00:46:00 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:46:00.675775 | orchestrator | 2025-05-30 00:46:00 | INFO  | Task 1f2ae5ba-61be-4e64-b5e4-4ec5ef197625 is in state STARTED 2025-05-30 00:46:00.678370 | orchestrator | 2025-05-30 00:46:00 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:46:00.678395 | orchestrator | 2025-05-30 00:46:00 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:46:03.769395 | orchestrator | 2025-05-30 00:46:03 | INFO  | Task e5c81b6d-a4eb-4404-8e83-20a9868ee367 is in state STARTED 2025-05-30 00:46:03.775135 | orchestrator | 2025-05-30 00:46:03 | INFO  | Task 9aa03750-5055-476a-a1d8-a6708f3720fe is in state STARTED 2025-05-30 00:46:03.775204 | orchestrator | 2025-05-30 00:46:03 | INFO  | Task 82d26080-429f-4a56-8a24-5e2143966421 is in state STARTED 2025-05-30 00:46:03.778004 | orchestrator | 2025-05-30 00:46:03 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:46:03.778076 | orchestrator | 2025-05-30 00:46:03 | INFO  | Task 1f2ae5ba-61be-4e64-b5e4-4ec5ef197625 is in state STARTED 2025-05-30 00:46:03.778333 | orchestrator | 2025-05-30 00:46:03 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:46:03.778353 | orchestrator | 2025-05-30 00:46:03 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:46:06.853870 | orchestrator | 2025-05-30 00:46:06 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:46:06.856867 | orchestrator | 2025-05-30 00:46:06 | INFO  | Task e5c81b6d-a4eb-4404-8e83-20a9868ee367 is in state STARTED 2025-05-30 00:46:06.859921 | orchestrator | 2025-05-30 00:46:06 | INFO  | Task 9aa03750-5055-476a-a1d8-a6708f3720fe is in state STARTED 2025-05-30 00:46:06.859947 | orchestrator | 2025-05-30 00:46:06 | INFO  | Task 82d26080-429f-4a56-8a24-5e2143966421 is in state SUCCESS 2025-05-30 00:46:06.859959 | orchestrator | 2025-05-30 
00:46:06 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:46:06.860456 | orchestrator | 2025-05-30 00:46:06 | INFO  | Task 1f2ae5ba-61be-4e64-b5e4-4ec5ef197625 is in state STARTED 2025-05-30 00:46:06.861536 | orchestrator | 2025-05-30 00:46:06 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:46:06.861817 | orchestrator | 2025-05-30 00:46:06 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:46:09.931202 | orchestrator | 2025-05-30 00:46:09 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:46:09.931312 | orchestrator | 2025-05-30 00:46:09 | INFO  | Task e5c81b6d-a4eb-4404-8e83-20a9868ee367 is in state STARTED 2025-05-30 00:46:09.931601 | orchestrator | 2025-05-30 00:46:09 | INFO  | Task 9aa03750-5055-476a-a1d8-a6708f3720fe is in state STARTED 2025-05-30 00:46:09.931623 | orchestrator | 2025-05-30 00:46:09 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:46:09.932430 | orchestrator | 2025-05-30 00:46:09 | INFO  | Task 1f2ae5ba-61be-4e64-b5e4-4ec5ef197625 is in state STARTED 2025-05-30 00:46:09.934128 | orchestrator | 2025-05-30 00:46:09 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:46:09.934266 | orchestrator | 2025-05-30 00:46:09 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:46:13.031392 | orchestrator | 2025-05-30 00:46:13 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:46:13.034091 | orchestrator | 2025-05-30 00:46:13 | INFO  | Task e5c81b6d-a4eb-4404-8e83-20a9868ee367 is in state STARTED 2025-05-30 00:46:13.040438 | orchestrator | 2025-05-30 00:46:13 | INFO  | Task 9aa03750-5055-476a-a1d8-a6708f3720fe is in state STARTED 2025-05-30 00:46:13.050414 | orchestrator | 2025-05-30 00:46:13 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:46:13.051695 | orchestrator | 2025-05-30 00:46:13 | INFO  | Task 1f2ae5ba-61be-4e64-b5e4-4ec5ef197625 is in state STARTED 2025-05-30 00:46:13.054691 | orchestrator | 2025-05-30 00:46:13 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:46:13.054747 | orchestrator | 2025-05-30 00:46:13 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:46:16.118421 | orchestrator | 2025-05-30 00:46:16 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:46:16.118601 | orchestrator | 2025-05-30 00:46:16 | INFO  | Task e5c81b6d-a4eb-4404-8e83-20a9868ee367 is in state STARTED 2025-05-30 00:46:16.123816 | orchestrator | 2025-05-30 00:46:16 | INFO  | Task 9aa03750-5055-476a-a1d8-a6708f3720fe is in state STARTED 2025-05-30 00:46:16.123863 | orchestrator | 2025-05-30 00:46:16 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:46:16.123875 | orchestrator | 2025-05-30 00:46:16 | INFO  | Task 1f2ae5ba-61be-4e64-b5e4-4ec5ef197625 is in state STARTED 2025-05-30 00:46:16.125750 | orchestrator | 2025-05-30 00:46:16 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:46:16.125775 | orchestrator | 2025-05-30 00:46:16 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:46:19.163950 | orchestrator | 2025-05-30 00:46:19 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:46:19.164078 | orchestrator | 2025-05-30 00:46:19 | INFO  | Task e5c81b6d-a4eb-4404-8e83-20a9868ee367 is in state STARTED 2025-05-30 00:46:19.164187 | 
orchestrator | 2025-05-30 00:46:19 | INFO  | Task 9aa03750-5055-476a-a1d8-a6708f3720fe is in state SUCCESS 2025-05-30 00:46:19.166532 | orchestrator | 2025-05-30 00:46:19 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:46:19.167117 | orchestrator | 2025-05-30 00:46:19 | INFO  | Task 1f2ae5ba-61be-4e64-b5e4-4ec5ef197625 is in state STARTED 2025-05-30 00:46:19.170165 | orchestrator | 2025-05-30 00:46:19 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:46:19.170208 | orchestrator | 2025-05-30 00:46:19 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:46:22.207887 | orchestrator | 2025-05-30 00:46:22 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:46:22.214965 | orchestrator | 2025-05-30 00:46:22 | INFO  | Task e5c81b6d-a4eb-4404-8e83-20a9868ee367 is in state STARTED 2025-05-30 00:46:22.215208 | orchestrator | 2025-05-30 00:46:22 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:46:22.215232 | orchestrator | 2025-05-30 00:46:22 | INFO  | Task 1f2ae5ba-61be-4e64-b5e4-4ec5ef197625 is in state STARTED 2025-05-30 00:46:22.216075 | orchestrator | 2025-05-30 00:46:22 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:46:22.216100 | orchestrator | 2025-05-30 00:46:22 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:46:25.254972 | orchestrator | 2025-05-30 00:46:25 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:46:25.274938 | orchestrator | 2025-05-30 00:46:25 | INFO  | Task e5c81b6d-a4eb-4404-8e83-20a9868ee367 is in state STARTED 2025-05-30 00:46:25.275005 | orchestrator | 2025-05-30 00:46:25 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:46:25.275018 | orchestrator | 2025-05-30 00:46:25 | INFO  | Task 1f2ae5ba-61be-4e64-b5e4-4ec5ef197625 is in state STARTED 2025-05-30 00:46:25.275029 | orchestrator | 2025-05-30 00:46:25 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:46:25.275061 | orchestrator | 2025-05-30 00:46:25 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:46:28.316340 | orchestrator | 2025-05-30 00:46:28 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:46:28.319775 | orchestrator | 2025-05-30 00:46:28 | INFO  | Task e5c81b6d-a4eb-4404-8e83-20a9868ee367 is in state STARTED 2025-05-30 00:46:28.319820 | orchestrator | 2025-05-30 00:46:28 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:46:28.321157 | orchestrator | 2025-05-30 00:46:28 | INFO  | Task 1f2ae5ba-61be-4e64-b5e4-4ec5ef197625 is in state STARTED 2025-05-30 00:46:28.321325 | orchestrator | 2025-05-30 00:46:28 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:46:28.321352 | orchestrator | 2025-05-30 00:46:28 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:46:31.357032 | orchestrator | 2025-05-30 00:46:31 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:46:31.357206 | orchestrator | 2025-05-30 00:46:31 | INFO  | Task e5c81b6d-a4eb-4404-8e83-20a9868ee367 is in state STARTED 2025-05-30 00:46:31.358227 | orchestrator | 2025-05-30 00:46:31 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:46:31.358874 | orchestrator | 2025-05-30 00:46:31 | INFO  | Task 1f2ae5ba-61be-4e64-b5e4-4ec5ef197625 is in state SUCCESS 
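
The geerlingguy.dotfiles play shown above follows a check-then-link pattern: inspect each configured dotfile in the target home directory, remove any plain file that would block the symlink, then link the entry from the cloned repository into the home folder. The playbook below is a minimal illustrative sketch of that pattern, not the role's actual tasks; the dotfiles_repo_path, the /home/dragon target and the single .tmux.conf item are assumptions taken from the output above.

  - name: Link dotfiles into the home folder (illustrative sketch)
    hosts: all
    vars:
      dotfiles_home: /home/dragon                 # assumption: target user seen in the log output
      dotfiles_repo_path: /home/dragon/dotfiles   # assumption: clone location of the dotfiles repo
      dotfiles_files:
        - .tmux.conf                              # only item visible in the output above
    tasks:
      - name: Check whether the configured dotfiles already exist
        ansible.builtin.stat:
          path: "{{ dotfiles_home }}/{{ item }}"
        loop: "{{ dotfiles_files }}"
        register: dotfile_stats

      - name: Remove existing dotfiles file if a replacement is being linked
        ansible.builtin.file:
          path: "{{ dotfiles_home }}/{{ item.item }}"
          state: absent
        loop: "{{ dotfile_stats.results }}"
        when: item.stat.exists and not item.stat.islnk

      - name: Link dotfiles into home folder
        ansible.builtin.file:
          src: "{{ dotfiles_repo_path }}/{{ item }}"
          dest: "{{ dotfiles_home }}/{{ item }}"
          state: link
          force: true
        loop: "{{ dotfiles_files }}"
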
2025-05-30 00:46:31.359899 | orchestrator | 2025-05-30 00:46:31.359932 | orchestrator | 2025-05-30 00:46:31.359952 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-05-30 00:46:31.359972 | orchestrator | 2025-05-30 00:46:31.359990 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-05-30 00:46:31.360009 | orchestrator | Friday 30 May 2025 00:45:30 +0000 (0:00:00.130) 0:00:00.130 ************ 2025-05-30 00:46:31.360027 | orchestrator | ok: [testbed-manager] => { 2025-05-30 00:46:31.360046 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2025-05-30 00:46:31.360065 | orchestrator | } 2025-05-30 00:46:31.360084 | orchestrator | 2025-05-30 00:46:31.360101 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-05-30 00:46:31.360119 | orchestrator | Friday 30 May 2025 00:45:30 +0000 (0:00:00.348) 0:00:00.479 ************ 2025-05-30 00:46:31.360137 | orchestrator | ok: [testbed-manager] 2025-05-30 00:46:31.360155 | orchestrator | 2025-05-30 00:46:31.360174 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-05-30 00:46:31.360191 | orchestrator | Friday 30 May 2025 00:45:31 +0000 (0:00:01.111) 0:00:01.590 ************ 2025-05-30 00:46:31.360210 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-05-30 00:46:31.360229 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-05-30 00:46:31.360248 | orchestrator | 2025-05-30 00:46:31.360265 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-05-30 00:46:31.360277 | orchestrator | Friday 30 May 2025 00:45:32 +0000 (0:00:00.926) 0:00:02.517 ************ 2025-05-30 00:46:31.360288 | orchestrator | changed: [testbed-manager] 2025-05-30 00:46:31.360298 | orchestrator | 2025-05-30 00:46:31.360309 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-05-30 00:46:31.360320 | orchestrator | Friday 30 May 2025 00:45:35 +0000 (0:00:02.458) 0:00:04.975 ************ 2025-05-30 00:46:31.360331 | orchestrator | changed: [testbed-manager] 2025-05-30 00:46:31.360343 | orchestrator | 2025-05-30 00:46:31.360353 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-05-30 00:46:31.360364 | orchestrator | Friday 30 May 2025 00:45:36 +0000 (0:00:01.551) 0:00:06.526 ************ 2025-05-30 00:46:31.360375 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
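
The "FAILED - RETRYING: ... (10 retries left)" line above is Ansible's standard retries/until behaviour: the "Manage homer service" task is re-run until the compose project reports running containers, which is why it eventually returns ok and dominates the timing summary (24.86s). Below is a hedged, generic sketch of that pattern under the assumption that the service is driven by a docker-compose.yml in /opt/homer; the exact commands, retry count and delay are illustrative, not the osism.services.homer implementation.

  - name: Manage a docker-compose based service with retries (illustrative sketch)
    hosts: testbed-manager
    tasks:
      - name: Bring up the compose project
        ansible.builtin.command:
          cmd: docker compose up -d
          chdir: /opt/homer              # assumption: directory holding the rendered docker-compose.yml
        changed_when: false

      - name: Manage homer service (retry until a container is running)
        ansible.builtin.command:
          cmd: docker compose ps -q      # a real check would also verify container health
          chdir: /opt/homer
        register: compose_ps
        until: compose_ps.stdout | length > 0
        retries: 10                      # matches the "10 retries left" message above
        delay: 5
        changed_when: false
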
2025-05-30 00:46:31.360386 | orchestrator | ok: [testbed-manager] 2025-05-30 00:46:31.360397 | orchestrator | 2025-05-30 00:46:31.360407 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-05-30 00:46:31.360418 | orchestrator | Friday 30 May 2025 00:46:01 +0000 (0:00:24.862) 0:00:31.389 ************ 2025-05-30 00:46:31.360429 | orchestrator | changed: [testbed-manager] 2025-05-30 00:46:31.360439 | orchestrator | 2025-05-30 00:46:31.360450 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:46:31.360461 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:46:31.360492 | orchestrator | 2025-05-30 00:46:31.360506 | orchestrator | Friday 30 May 2025 00:46:03 +0000 (0:00:02.015) 0:00:33.405 ************ 2025-05-30 00:46:31.360518 | orchestrator | =============================================================================== 2025-05-30 00:46:31.360531 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.86s 2025-05-30 00:46:31.360543 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.46s 2025-05-30 00:46:31.360555 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.02s 2025-05-30 00:46:31.360568 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.55s 2025-05-30 00:46:31.360580 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.11s 2025-05-30 00:46:31.360591 | orchestrator | osism.services.homer : Create required directories ---------------------- 0.93s 2025-05-30 00:46:31.360602 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.35s 2025-05-30 00:46:31.360613 | orchestrator | 2025-05-30 00:46:31.360624 | orchestrator | 2025-05-30 00:46:31.360634 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-05-30 00:46:31.360645 | orchestrator | 2025-05-30 00:46:31.360663 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-05-30 00:46:31.360674 | orchestrator | Friday 30 May 2025 00:45:29 +0000 (0:00:00.409) 0:00:00.409 ************ 2025-05-30 00:46:31.360686 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-05-30 00:46:31.360697 | orchestrator | 2025-05-30 00:46:31.360708 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-05-30 00:46:31.360749 | orchestrator | Friday 30 May 2025 00:45:30 +0000 (0:00:00.384) 0:00:00.794 ************ 2025-05-30 00:46:31.360761 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-05-30 00:46:31.360772 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-05-30 00:46:31.360783 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-05-30 00:46:31.360794 | orchestrator | 2025-05-30 00:46:31.360805 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-05-30 00:46:31.360815 | orchestrator | Friday 30 May 2025 00:45:31 +0000 (0:00:01.291) 0:00:02.085 ************ 2025-05-30 00:46:31.360826 | orchestrator | changed: [testbed-manager] 
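
Both the homer and the openstackclient plays above create a directory tree under /opt/<service>, render a docker-compose.yml into it, and attach the service to the pre-created external "traefik" network. The compose file below is only an illustrative sketch of that layout; the image tag, volume path and traefik label are assumptions and not the files actually shipped or rendered by the osism collections.

  # /opt/homer/docker-compose.yml (illustrative sketch, not the rendered file)
  services:
    homer:
      image: b4bz/homer:latest                     # assumption: upstream Homer dashboard image
      restart: unless-stopped
      volumes:
        - /opt/homer/configuration:/www/assets     # config.yml copied by the role
      networks:
        - traefik
      labels:
        - "traefik.enable=true"                    # assumption: traefik routes requests to the dashboard

  networks:
    traefik:
      external: true                               # created by the "Create traefik external network" task
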
2025-05-30 00:46:31.360837 | orchestrator | 2025-05-30 00:46:31.360848 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-05-30 00:46:31.360859 | orchestrator | Friday 30 May 2025 00:45:32 +0000 (0:00:01.248) 0:00:03.334 ************ 2025-05-30 00:46:31.360870 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-05-30 00:46:31.360881 | orchestrator | ok: [testbed-manager] 2025-05-30 00:46:31.360892 | orchestrator | 2025-05-30 00:46:31.360915 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-05-30 00:46:31.360927 | orchestrator | Friday 30 May 2025 00:46:10 +0000 (0:00:37.562) 0:00:40.897 ************ 2025-05-30 00:46:31.360938 | orchestrator | changed: [testbed-manager] 2025-05-30 00:46:31.360949 | orchestrator | 2025-05-30 00:46:31.360959 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-05-30 00:46:31.360970 | orchestrator | Friday 30 May 2025 00:46:12 +0000 (0:00:02.197) 0:00:43.094 ************ 2025-05-30 00:46:31.360981 | orchestrator | ok: [testbed-manager] 2025-05-30 00:46:31.360992 | orchestrator | 2025-05-30 00:46:31.361003 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-05-30 00:46:31.361014 | orchestrator | Friday 30 May 2025 00:46:14 +0000 (0:00:01.610) 0:00:44.705 ************ 2025-05-30 00:46:31.361024 | orchestrator | changed: [testbed-manager] 2025-05-30 00:46:31.361035 | orchestrator | 2025-05-30 00:46:31.361046 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-05-30 00:46:31.361063 | orchestrator | Friday 30 May 2025 00:46:16 +0000 (0:00:02.100) 0:00:46.806 ************ 2025-05-30 00:46:31.361074 | orchestrator | changed: [testbed-manager] 2025-05-30 00:46:31.361085 | orchestrator | 2025-05-30 00:46:31.361096 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-05-30 00:46:31.361107 | orchestrator | Friday 30 May 2025 00:46:16 +0000 (0:00:00.841) 0:00:47.647 ************ 2025-05-30 00:46:31.361118 | orchestrator | changed: [testbed-manager] 2025-05-30 00:46:31.361129 | orchestrator | 2025-05-30 00:46:31.361139 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-05-30 00:46:31.361150 | orchestrator | Friday 30 May 2025 00:46:17 +0000 (0:00:00.603) 0:00:48.250 ************ 2025-05-30 00:46:31.361161 | orchestrator | ok: [testbed-manager] 2025-05-30 00:46:31.361172 | orchestrator | 2025-05-30 00:46:31.361183 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:46:31.361194 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:46:31.361205 | orchestrator | 2025-05-30 00:46:31.361216 | orchestrator | Friday 30 May 2025 00:46:17 +0000 (0:00:00.388) 0:00:48.639 ************ 2025-05-30 00:46:31.361227 | orchestrator | =============================================================================== 2025-05-30 00:46:31.361237 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 37.56s 2025-05-30 00:46:31.361248 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.20s 2025-05-30 00:46:31.361259 | orchestrator | osism.services.openstackclient : Restart 
openstackclient service -------- 2.10s 2025-05-30 00:46:31.361270 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.61s 2025-05-30 00:46:31.361281 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.29s 2025-05-30 00:46:31.361291 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.25s 2025-05-30 00:46:31.361302 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.84s 2025-05-30 00:46:31.361313 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.60s 2025-05-30 00:46:31.361324 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.39s 2025-05-30 00:46:31.361334 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.38s 2025-05-30 00:46:31.361345 | orchestrator | 2025-05-30 00:46:31.361356 | orchestrator | 2025-05-30 00:46:31.361367 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-30 00:46:31.361378 | orchestrator | 2025-05-30 00:46:31.361388 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-30 00:46:31.361399 | orchestrator | Friday 30 May 2025 00:45:31 +0000 (0:00:00.323) 0:00:00.323 ************ 2025-05-30 00:46:31.361410 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-05-30 00:46:31.361421 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-05-30 00:46:31.361432 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-05-30 00:46:31.361447 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-05-30 00:46:31.361458 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-05-30 00:46:31.361469 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-05-30 00:46:31.361479 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-05-30 00:46:31.361490 | orchestrator | 2025-05-30 00:46:31.361501 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-05-30 00:46:31.361512 | orchestrator | 2025-05-30 00:46:31.361523 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-05-30 00:46:31.361533 | orchestrator | Friday 30 May 2025 00:45:32 +0000 (0:00:01.000) 0:00:01.323 ************ 2025-05-30 00:46:31.361557 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:46:31.361575 | orchestrator | 2025-05-30 00:46:31.361586 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-05-30 00:46:31.361597 | orchestrator | Friday 30 May 2025 00:45:34 +0000 (0:00:02.595) 0:00:03.918 ************ 2025-05-30 00:46:31.361608 | orchestrator | ok: [testbed-manager] 2025-05-30 00:46:31.361618 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:46:31.361629 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:46:31.361640 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:46:31.361651 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:46:31.361662 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:46:31.361673 | 
orchestrator | ok: [testbed-node-5] 2025-05-30 00:46:31.361683 | orchestrator | 2025-05-30 00:46:31.361694 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-05-30 00:46:31.361711 | orchestrator | Friday 30 May 2025 00:45:37 +0000 (0:00:02.232) 0:00:06.150 ************ 2025-05-30 00:46:31.361743 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:46:31.361754 | orchestrator | ok: [testbed-manager] 2025-05-30 00:46:31.361765 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:46:31.361775 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:46:31.361786 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:46:31.361797 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:46:31.361807 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:46:31.361818 | orchestrator | 2025-05-30 00:46:31.361828 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-05-30 00:46:31.361839 | orchestrator | Friday 30 May 2025 00:45:40 +0000 (0:00:03.311) 0:00:09.462 ************ 2025-05-30 00:46:31.361850 | orchestrator | changed: [testbed-manager] 2025-05-30 00:46:31.361861 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:46:31.361871 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:46:31.361882 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:46:31.361892 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:46:31.361903 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:46:31.361914 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:46:31.361924 | orchestrator | 2025-05-30 00:46:31.361935 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-05-30 00:46:31.361946 | orchestrator | Friday 30 May 2025 00:45:42 +0000 (0:00:02.074) 0:00:11.536 ************ 2025-05-30 00:46:31.361957 | orchestrator | changed: [testbed-manager] 2025-05-30 00:46:31.361967 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:46:31.361978 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:46:31.361989 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:46:31.361999 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:46:31.362010 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:46:31.362086 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:46:31.362098 | orchestrator | 2025-05-30 00:46:31.362109 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-05-30 00:46:31.362120 | orchestrator | Friday 30 May 2025 00:45:51 +0000 (0:00:09.086) 0:00:20.623 ************ 2025-05-30 00:46:31.362130 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:46:31.362141 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:46:31.362152 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:46:31.362162 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:46:31.362173 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:46:31.362183 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:46:31.362194 | orchestrator | changed: [testbed-manager] 2025-05-30 00:46:31.362204 | orchestrator | 2025-05-30 00:46:31.362215 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-05-30 00:46:31.362226 | orchestrator | Friday 30 May 2025 00:46:08 +0000 (0:00:16.625) 0:00:37.248 ************ 2025-05-30 00:46:31.362237 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for 
testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:46:31.362261 | orchestrator | 2025-05-30 00:46:31.362272 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-05-30 00:46:31.362283 | orchestrator | Friday 30 May 2025 00:46:09 +0000 (0:00:01.719) 0:00:38.968 ************ 2025-05-30 00:46:31.362293 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-05-30 00:46:31.362304 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-05-30 00:46:31.362315 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-05-30 00:46:31.362325 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-05-30 00:46:31.362336 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-05-30 00:46:31.362346 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-05-30 00:46:31.362357 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-05-30 00:46:31.362368 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-05-30 00:46:31.362378 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-05-30 00:46:31.362389 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-05-30 00:46:31.362399 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-05-30 00:46:31.362410 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-05-30 00:46:31.362421 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-05-30 00:46:31.362436 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-05-30 00:46:31.362447 | orchestrator | 2025-05-30 00:46:31.362458 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-05-30 00:46:31.362469 | orchestrator | Friday 30 May 2025 00:46:16 +0000 (0:00:06.771) 0:00:45.740 ************ 2025-05-30 00:46:31.362479 | orchestrator | ok: [testbed-manager] 2025-05-30 00:46:31.362490 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:46:31.362501 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:46:31.362511 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:46:31.362522 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:46:31.362532 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:46:31.362543 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:46:31.362554 | orchestrator | 2025-05-30 00:46:31.362565 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-05-30 00:46:31.362576 | orchestrator | Friday 30 May 2025 00:46:17 +0000 (0:00:01.353) 0:00:47.094 ************ 2025-05-30 00:46:31.362586 | orchestrator | changed: [testbed-manager] 2025-05-30 00:46:31.362597 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:46:31.362607 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:46:31.362618 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:46:31.362629 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:46:31.362639 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:46:31.362650 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:46:31.362661 | orchestrator | 2025-05-30 00:46:31.362671 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-05-30 00:46:31.362682 | orchestrator | Friday 30 May 2025 00:46:19 +0000 (0:00:01.733) 0:00:48.827 ************ 2025-05-30 00:46:31.362693 | 
orchestrator | ok: [testbed-manager] 2025-05-30 00:46:31.362704 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:46:31.362750 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:46:31.362763 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:46:31.362781 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:46:31.362793 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:46:31.362803 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:46:31.362814 | orchestrator | 2025-05-30 00:46:31.362825 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-05-30 00:46:31.362836 | orchestrator | Friday 30 May 2025 00:46:21 +0000 (0:00:01.367) 0:00:50.194 ************ 2025-05-30 00:46:31.362846 | orchestrator | ok: [testbed-manager] 2025-05-30 00:46:31.362857 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:46:31.362868 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:46:31.362878 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:46:31.362895 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:46:31.362906 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:46:31.362916 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:46:31.362927 | orchestrator | 2025-05-30 00:46:31.362938 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-05-30 00:46:31.362948 | orchestrator | Friday 30 May 2025 00:46:23 +0000 (0:00:02.217) 0:00:52.412 ************ 2025-05-30 00:46:31.362959 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-05-30 00:46:31.362971 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:46:31.362982 | orchestrator | 2025-05-30 00:46:31.362993 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-05-30 00:46:31.363004 | orchestrator | Friday 30 May 2025 00:46:24 +0000 (0:00:01.582) 0:00:53.994 ************ 2025-05-30 00:46:31.363014 | orchestrator | changed: [testbed-manager] 2025-05-30 00:46:31.363025 | orchestrator | 2025-05-30 00:46:31.363036 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-05-30 00:46:31.363047 | orchestrator | Friday 30 May 2025 00:46:27 +0000 (0:00:02.765) 0:00:56.759 ************ 2025-05-30 00:46:31.363058 | orchestrator | changed: [testbed-manager] 2025-05-30 00:46:31.363069 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:46:31.363079 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:46:31.363090 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:46:31.363101 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:46:31.363111 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:46:31.363122 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:46:31.363133 | orchestrator | 2025-05-30 00:46:31.363143 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:46:31.363154 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:46:31.363165 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:46:31.363176 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 
skipped=0 rescued=0 ignored=0 2025-05-30 00:46:31.363187 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:46:31.363198 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:46:31.363209 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:46:31.363220 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:46:31.363230 | orchestrator | 2025-05-30 00:46:31.363241 | orchestrator | Friday 30 May 2025 00:46:30 +0000 (0:00:02.892) 0:00:59.651 ************ 2025-05-30 00:46:31.363252 | orchestrator | =============================================================================== 2025-05-30 00:46:31.363263 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 16.63s 2025-05-30 00:46:31.363273 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.09s 2025-05-30 00:46:31.363284 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.77s 2025-05-30 00:46:31.363295 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.31s 2025-05-30 00:46:31.363306 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 2.89s 2025-05-30 00:46:31.363322 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.77s 2025-05-30 00:46:31.363333 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.60s 2025-05-30 00:46:31.363344 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.23s 2025-05-30 00:46:31.363354 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.22s 2025-05-30 00:46:31.363365 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.07s 2025-05-30 00:46:31.363375 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.73s 2025-05-30 00:46:31.363386 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.72s 2025-05-30 00:46:31.363397 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.58s 2025-05-30 00:46:31.363408 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.37s 2025-05-30 00:46:31.363424 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.35s 2025-05-30 00:46:31.363436 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.00s 2025-05-30 00:46:31.363447 | orchestrator | 2025-05-30 00:46:31 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:46:31.363458 | orchestrator | 2025-05-30 00:46:31 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:46:34.416373 | orchestrator | 2025-05-30 00:46:34 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:46:34.416920 | orchestrator | 2025-05-30 00:46:34 | INFO  | Task e5c81b6d-a4eb-4404-8e83-20a9868ee367 is in state STARTED 2025-05-30 00:46:34.417294 | orchestrator | 2025-05-30 00:46:34 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:46:34.419128 | orchestrator | 
2025-05-30 00:46:34 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:46:34.419140 | orchestrator | 2025-05-30 00:46:34 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:46:37.456146 | orchestrator | 2025-05-30 00:46:37 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:46:37.460106 | orchestrator | 2025-05-30 00:46:37 | INFO  | Task e5c81b6d-a4eb-4404-8e83-20a9868ee367 is in state STARTED 2025-05-30 00:46:37.460171 | orchestrator | 2025-05-30 00:46:37 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:46:37.460186 | orchestrator | 2025-05-30 00:46:37 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:46:37.460199 | orchestrator | 2025-05-30 00:46:37 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:46:40.504656 | orchestrator | 2025-05-30 00:46:40 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:46:40.504835 | orchestrator | 2025-05-30 00:46:40 | INFO  | Task e5c81b6d-a4eb-4404-8e83-20a9868ee367 is in state STARTED 2025-05-30 00:46:40.505419 | orchestrator | 2025-05-30 00:46:40 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:46:40.505898 | orchestrator | 2025-05-30 00:46:40 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:46:40.507422 | orchestrator | 2025-05-30 00:46:40 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:46:43.554350 | orchestrator | 2025-05-30 00:46:43 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:46:43.554456 | orchestrator | 2025-05-30 00:46:43 | INFO  | Task e5c81b6d-a4eb-4404-8e83-20a9868ee367 is in state STARTED 2025-05-30 00:46:43.556146 | orchestrator | 2025-05-30 00:46:43 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:46:43.557134 | orchestrator | 2025-05-30 00:46:43 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:46:43.557178 | orchestrator | 2025-05-30 00:46:43 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:46:46.618507 | orchestrator | 2025-05-30 00:46:46 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:46:46.619810 | orchestrator | 2025-05-30 00:46:46 | INFO  | Task e5c81b6d-a4eb-4404-8e83-20a9868ee367 is in state STARTED 2025-05-30 00:46:46.622353 | orchestrator | 2025-05-30 00:46:46 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:46:46.623304 | orchestrator | 2025-05-30 00:46:46 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:46:46.624249 | orchestrator | 2025-05-30 00:46:46 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:46:49.679517 | orchestrator | 2025-05-30 00:46:49 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:46:49.680957 | orchestrator | 2025-05-30 00:46:49 | INFO  | Task e5c81b6d-a4eb-4404-8e83-20a9868ee367 is in state STARTED 2025-05-30 00:46:49.682317 | orchestrator | 2025-05-30 00:46:49 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:46:49.683674 | orchestrator | 2025-05-30 00:46:49 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:46:49.683738 | orchestrator | 2025-05-30 00:46:49 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:46:52.735232 | orchestrator | 2025-05-30 00:46:52 | INFO 
 | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:46:52.735554 | orchestrator | 2025-05-30 00:46:52 | INFO  | Task e5c81b6d-a4eb-4404-8e83-20a9868ee367 is in state STARTED 2025-05-30 00:46:52.737593 | orchestrator | 2025-05-30 00:46:52 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:46:52.739084 | orchestrator | 2025-05-30 00:46:52 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:46:52.739112 | orchestrator | 2025-05-30 00:46:52 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:46:55.776557 | orchestrator | 2025-05-30 00:46:55 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:46:55.777505 | orchestrator | 2025-05-30 00:46:55 | INFO  | Task e5c81b6d-a4eb-4404-8e83-20a9868ee367 is in state STARTED 2025-05-30 00:46:55.779710 | orchestrator | 2025-05-30 00:46:55 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:46:55.780805 | orchestrator | 2025-05-30 00:46:55 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:46:55.780881 | orchestrator | 2025-05-30 00:46:55 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:46:58.841211 | orchestrator | 2025-05-30 00:46:58 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:46:58.841363 | orchestrator | 2025-05-30 00:46:58 | INFO  | Task e5c81b6d-a4eb-4404-8e83-20a9868ee367 is in state STARTED 2025-05-30 00:46:58.841456 | orchestrator | 2025-05-30 00:46:58 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:46:58.841894 | orchestrator | 2025-05-30 00:46:58 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:46:58.841921 | orchestrator | 2025-05-30 00:46:58 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:47:01.897857 | orchestrator | 2025-05-30 00:47:01 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:47:01.897959 | orchestrator | 2025-05-30 00:47:01 | INFO  | Task e5c81b6d-a4eb-4404-8e83-20a9868ee367 is in state SUCCESS 2025-05-30 00:47:01.898183 | orchestrator | 2025-05-30 00:47:01 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:47:01.899848 | orchestrator | 2025-05-30 00:47:01 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:47:01.899874 | orchestrator | 2025-05-30 00:47:01 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:47:04.958616 | orchestrator | 2025-05-30 00:47:04 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:47:04.958867 | orchestrator | 2025-05-30 00:47:04 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:47:04.959154 | orchestrator | 2025-05-30 00:47:04 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:47:04.959405 | orchestrator | 2025-05-30 00:47:04 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:47:08.027279 | orchestrator | 2025-05-30 00:47:08 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:47:08.030339 | orchestrator | 2025-05-30 00:47:08 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:47:08.036581 | orchestrator | 2025-05-30 00:47:08 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:47:08.036653 | orchestrator | 2025-05-30 00:47:08 | INFO  | 
Wait 1 second(s) until the next check 2025-05-30 00:47:11.075447 | orchestrator | 2025-05-30 00:47:11 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:47:11.076591 | orchestrator | 2025-05-30 00:47:11 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:47:11.077920 | orchestrator | 2025-05-30 00:47:11 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:47:11.077953 | orchestrator | 2025-05-30 00:47:11 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:47:14.120549 | orchestrator | 2025-05-30 00:47:14 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:47:14.124462 | orchestrator | 2025-05-30 00:47:14 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:47:14.126007 | orchestrator | 2025-05-30 00:47:14 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:47:14.126152 | orchestrator | 2025-05-30 00:47:14 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:47:17.163573 | orchestrator | 2025-05-30 00:47:17 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:47:17.165499 | orchestrator | 2025-05-30 00:47:17 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:47:17.166584 | orchestrator | 2025-05-30 00:47:17 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:47:17.166949 | orchestrator | 2025-05-30 00:47:17 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:47:20.203172 | orchestrator | 2025-05-30 00:47:20 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:47:20.205702 | orchestrator | 2025-05-30 00:47:20 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:47:20.208012 | orchestrator | 2025-05-30 00:47:20 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:47:20.208063 | orchestrator | 2025-05-30 00:47:20 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:47:23.283561 | orchestrator | 2025-05-30 00:47:23 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:47:23.285272 | orchestrator | 2025-05-30 00:47:23 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:47:23.288213 | orchestrator | 2025-05-30 00:47:23 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:47:23.288270 | orchestrator | 2025-05-30 00:47:23 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:47:26.331800 | orchestrator | 2025-05-30 00:47:26 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:47:26.331901 | orchestrator | 2025-05-30 00:47:26 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:47:26.331914 | orchestrator | 2025-05-30 00:47:26 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED 2025-05-30 00:47:26.331925 | orchestrator | 2025-05-30 00:47:26 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:47:29.375660 | orchestrator | 2025-05-30 00:47:29 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:47:29.377039 | orchestrator | 2025-05-30 00:47:29 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:47:29.378183 | orchestrator | 2025-05-30 00:47:29 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in 
state STARTED
2025-05-30 00:47:29.378218 | orchestrator | 2025-05-30 00:47:29 | INFO  | Wait 1 second(s) until the next check
2025-05-30 00:47:32.430497 | orchestrator | 2025-05-30 00:47:32 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED
2025-05-30 00:47:32.430693 | orchestrator | 2025-05-30 00:47:32 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED
2025-05-30 00:47:32.432209 | orchestrator | 2025-05-30 00:47:32 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state STARTED
2025-05-30 00:47:32.432250 | orchestrator | 2025-05-30 00:47:32 | INFO  | Wait 1 second(s) until the next check
2025-05-30 00:47:35.465671 | orchestrator | 2025-05-30 00:47:35 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED
2025-05-30 00:47:35.465849 | orchestrator | 2025-05-30 00:47:35 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED
2025-05-30 00:47:35.468408 | orchestrator | 2025-05-30 00:47:35 | INFO  | Task 09e83007-0245-415e-a945-18fc016cb88e is in state SUCCESS
2025-05-30 00:47:35.469390 | orchestrator |
2025-05-30 00:47:35.469413 | orchestrator |
2025-05-30 00:47:35.469424 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-05-30 00:47:35.469435 | orchestrator |
2025-05-30 00:47:35.469467 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-05-30 00:47:35.469477 | orchestrator | Friday 30 May 2025 00:45:44 +0000 (0:00:00.259) 0:00:00.259 ************
2025-05-30 00:47:35.469487 | orchestrator | ok: [testbed-manager]
2025-05-30 00:47:35.469498 | orchestrator |
2025-05-30 00:47:35.469509 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-05-30 00:47:35.469519 | orchestrator | Friday 30 May 2025 00:45:45 +0000 (0:00:01.173) 0:00:01.433 ************
2025-05-30 00:47:35.469530 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-05-30 00:47:35.469540 | orchestrator |
2025-05-30 00:47:35.469550 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-05-30 00:47:35.469559 | orchestrator | Friday 30 May 2025 00:45:46 +0000 (0:00:00.639) 0:00:02.072 ************
2025-05-30 00:47:35.469569 | orchestrator | changed: [testbed-manager]
2025-05-30 00:47:35.469579 | orchestrator |
2025-05-30 00:47:35.469589 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-05-30 00:47:35.469598 | orchestrator | Friday 30 May 2025 00:45:47 +0000 (0:00:01.193) 0:00:03.265 ************
2025-05-30 00:47:35.469670 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
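The `Task ... is in state ...` lines above come from the deploy tooling's wait loop: each task ID is polled until it leaves the STARTED state, with a pause between rounds as announced by `Wait 1 second(s) until the next check`. Once a task reports SUCCESS (here 09e83007-0245-415e-a945-18fc016cb88e), the corresponding Ansible output appears in the console, which is why the play's own timestamps (00:45:44 through 00:47:00) predate the surrounding 00:47:35 console timestamps. A minimal sketch of that polling pattern, assuming a caller-supplied `get_state` lookup (hypothetical; the real state query used by the CLI is not shown in this log):

```python
# Sketch only: get_state stands in for the real task-state lookup used by the CLI.
import time
from typing import Callable, Iterable


def wait_for_tasks(task_ids: Iterable[str],
                   get_state: Callable[[str], str],
                   check_interval: int = 1) -> None:
    """Poll every task until it has left the STARTED state."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)  # e.g. "STARTED", "SUCCESS"
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.discard(task_id)
        if pending:
            print(f"Wait {check_interval} second(s) until the next check")
            time.sleep(check_interval)
```

Called with the task IDs from the log and a real state lookup, this reproduces the cadence seen above: one status line per still-pending task, then a short pause before the next round.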
2025-05-30 00:47:35.469712 | orchestrator | ok: [testbed-manager]
2025-05-30 00:47:35.469723 | orchestrator |
2025-05-30 00:47:35.469756 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-05-30 00:47:35.469766 | orchestrator | Friday 30 May 2025 00:46:57 +0000 (0:01:09.605) 0:01:12.871 ************
2025-05-30 00:47:35.469776 | orchestrator | changed: [testbed-manager]
2025-05-30 00:47:35.469816 | orchestrator |
2025-05-30 00:47:35.469827 | orchestrator | PLAY RECAP *********************************************************************
2025-05-30 00:47:35.469837 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-30 00:47:35.469849 | orchestrator |
2025-05-30 00:47:35.469858 | orchestrator | Friday 30 May 2025 00:47:00 +0000 (0:00:03.340) 0:01:16.211 ************
2025-05-30 00:47:35.469868 | orchestrator | ===============================================================================
2025-05-30 00:47:35.469877 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 69.61s
2025-05-30 00:47:35.469887 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.34s
2025-05-30 00:47:35.469896 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.19s
2025-05-30 00:47:35.469906 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.17s
2025-05-30 00:47:35.469916 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.64s
2025-05-30 00:47:35.469925 | orchestrator |
2025-05-30 00:47:35.471359 | orchestrator |
2025-05-30 00:47:35.471398 | orchestrator | PLAY [Apply role common] *******************************************************
2025-05-30 00:47:35.471409 | orchestrator |
2025-05-30 00:47:35.471419 | orchestrator | TASK [common : include_tasks] **************************************************
2025-05-30 00:47:35.471430 | orchestrator | Friday 30 May 2025 00:45:26 +0000 (0:00:00.341) 0:00:00.341 ************
2025-05-30 00:47:35.471440 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-30 00:47:35.471453 | orchestrator |
2025-05-30 00:47:35.471463 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-05-30 00:47:35.471473 | orchestrator | Friday 30 May 2025 00:45:27 +0000 (0:00:01.479) 0:00:01.820 ************
2025-05-30 00:47:35.471482 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-30 00:47:35.471492 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-30 00:47:35.471502 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-30 00:47:35.471512 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-30 00:47:35.471521 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-30 00:47:35.471531 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-30 00:47:35.471540 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-30 00:47:35.471550 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'},
'fluentd']) 2025-05-30 00:47:35.471559 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-30 00:47:35.471570 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-30 00:47:35.471580 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-30 00:47:35.471589 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-30 00:47:35.471599 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-30 00:47:35.471609 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-30 00:47:35.471618 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-30 00:47:35.471641 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-30 00:47:35.471651 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-30 00:47:35.471661 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-30 00:47:35.471676 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-30 00:47:35.471686 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-30 00:47:35.471696 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-30 00:47:35.471706 | orchestrator | 2025-05-30 00:47:35.471715 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-05-30 00:47:35.471725 | orchestrator | Friday 30 May 2025 00:45:31 +0000 (0:00:03.441) 0:00:05.262 ************ 2025-05-30 00:47:35.471757 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:47:35.471769 | orchestrator | 2025-05-30 00:47:35.471779 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-05-30 00:47:35.471789 | orchestrator | Friday 30 May 2025 00:45:32 +0000 (0:00:01.504) 0:00:06.766 ************ 2025-05-30 00:47:35.471806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-30 00:47:35.471822 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-30 00:47:35.471844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-30 00:47:35.471856 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-30 00:47:35.471866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-30 00:47:35.471882 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-30 00:47:35.471897 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.471909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.471920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.471938 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-30 00:47:35.471949 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.471959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.471979 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.471996 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.472009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.472020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.472031 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.472050 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.472061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.472080 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.472090 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.472100 | orchestrator | 2025-05-30 00:47:35.472110 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-05-30 00:47:35.472120 | orchestrator | Friday 30 May 2025 00:45:37 +0000 (0:00:04.849) 0:00:11.615 ************ 2025-05-30 00:47:35.472131 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-30 00:47:35.472142 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.472153 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.472174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-30 00:47:35.472185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.472201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.472211 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:47:35.472222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-30 00:47:35.472237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.472248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.472258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-30 00:47:35.472268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.472287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.472297 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:47:35.472308 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:47:35.472323 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-30 00:47:35.472333 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.472343 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.472353 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:47:35.472363 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:47:35.472377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-30 00:47:35.472388 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.472398 | orchestrator 
| skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.472408 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:47:35.472432 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-30 00:47:35.472448 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.472459 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.472470 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:47:35.472480 | orchestrator | 2025-05-30 00:47:35.472490 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-05-30 00:47:35.472500 | orchestrator | Friday 30 May 2025 00:45:39 +0000 (0:00:02.067) 0:00:13.683 ************ 2025-05-30 00:47:35.472510 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-30 00:47:35.472524 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': 
True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.472535 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.472545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-30 00:47:35.472561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.472579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.472589 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:47:35.472599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-30 00:47:35.472610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.472620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.472633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-30 00:47:35.472644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.472654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.472670 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:47:35.472681 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:47:35.473557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-30 00:47:35.473585 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.473597 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.473607 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:47:35.473617 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:47:35.473627 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-30 00:47:35.473642 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.473653 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.473664 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:47:35.473674 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-30 00:47:35.473703 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.473715 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.473725 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:47:35.473800 | orchestrator | 2025-05-30 00:47:35.473813 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-05-30 00:47:35.473823 | orchestrator | Friday 30 May 2025 00:45:41 +0000 (0:00:02.434) 0:00:16.118 ************ 2025-05-30 00:47:35.473833 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:47:35.473843 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:47:35.473852 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:47:35.473862 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:47:35.473871 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:47:35.473881 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:47:35.473891 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:47:35.473900 | orchestrator | 2025-05-30 00:47:35.473910 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-05-30 00:47:35.473920 | orchestrator | Friday 30 May 2025 00:45:42 +0000 (0:00:00.838) 0:00:16.956 ************ 2025-05-30 00:47:35.473929 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:47:35.473939 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:47:35.473949 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:47:35.473958 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:47:35.473968 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:47:35.473977 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:47:35.473987 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:47:35.473997 | orchestrator | 2025-05-30 00:47:35.474007 | orchestrator | TASK [common : Ensure fluentd image is present for label check] **************** 2025-05-30 00:47:35.474080 | orchestrator | Friday 30 May 2025 00:45:43 +0000 (0:00:00.834) 0:00:17.790 ************ 2025-05-30 00:47:35.474095 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:47:35.474105 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:47:35.474115 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:47:35.474124 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:47:35.474134 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:47:35.474144 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:47:35.474156 | orchestrator | changed: [testbed-manager] 2025-05-30 00:47:35.474167 | orchestrator | 2025-05-30 00:47:35.474178 | orchestrator | TASK [common : Fetch fluentd Docker image labels] ****************************** 2025-05-30 00:47:35.474189 | orchestrator | Friday 30 May 2025 00:46:11 +0000 (0:00:27.596) 0:00:45.386 ************ 2025-05-30 00:47:35.474200 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:47:35.474211 | 
orchestrator | ok: [testbed-node-1] 2025-05-30 00:47:35.474222 | orchestrator | ok: [testbed-manager] 2025-05-30 00:47:35.474241 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:47:35.474257 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:47:35.474268 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:47:35.474279 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:47:35.474290 | orchestrator | 2025-05-30 00:47:35.474300 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-05-30 00:47:35.474312 | orchestrator | Friday 30 May 2025 00:46:14 +0000 (0:00:02.920) 0:00:48.307 ************ 2025-05-30 00:47:35.474323 | orchestrator | ok: [testbed-manager] 2025-05-30 00:47:35.474333 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:47:35.474343 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:47:35.474354 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:47:35.474364 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:47:35.474375 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:47:35.474385 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:47:35.474396 | orchestrator | 2025-05-30 00:47:35.474407 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ****************************** 2025-05-30 00:47:35.474419 | orchestrator | Friday 30 May 2025 00:46:15 +0000 (0:00:01.298) 0:00:49.606 ************ 2025-05-30 00:47:35.474430 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:47:35.474441 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:47:35.474452 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:47:35.474462 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:47:35.474474 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:47:35.474485 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:47:35.474496 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:47:35.474505 | orchestrator | 2025-05-30 00:47:35.474515 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-05-30 00:47:35.474524 | orchestrator | Friday 30 May 2025 00:46:16 +0000 (0:00:01.097) 0:00:50.703 ************ 2025-05-30 00:47:35.474534 | orchestrator | skipping: [testbed-manager] 2025-05-30 00:47:35.474544 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:47:35.474553 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:47:35.474563 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:47:35.474572 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:47:35.474582 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:47:35.474591 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:47:35.474601 | orchestrator | 2025-05-30 00:47:35.474610 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-05-30 00:47:35.474620 | orchestrator | Friday 30 May 2025 00:46:17 +0000 (0:00:00.785) 0:00:51.489 ************ 2025-05-30 00:47:35.474638 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-30 00:47:35.474649 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-30 00:47:35.474660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-30 00:47:35.474677 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.474687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-30 00:47:35.474698 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-30 00:47:35.474708 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.474718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.474749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.474761 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-30 00:47:35.474782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.474796 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-30 00:47:35.474806 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.474843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.474855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.474870 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.474881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.474897 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.474907 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.474917 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.474931 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.474942 | orchestrator | 2025-05-30 00:47:35.474951 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-05-30 00:47:35.474961 | orchestrator | Friday 30 May 2025 00:46:21 +0000 (0:00:04.577) 0:00:56.066 ************ 2025-05-30 00:47:35.474971 | orchestrator | [WARNING]: Skipped 2025-05-30 00:47:35.474981 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-05-30 00:47:35.474991 | orchestrator | to this access issue: 2025-05-30 00:47:35.475001 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-05-30 00:47:35.475011 | orchestrator | directory 2025-05-30 00:47:35.475020 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-30 00:47:35.475030 | orchestrator | 2025-05-30 00:47:35.475040 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-05-30 00:47:35.475049 | orchestrator | Friday 30 May 2025 00:46:22 +0000 (0:00:01.071) 0:00:57.137 ************ 2025-05-30 00:47:35.475059 | orchestrator | [WARNING]: Skipped 2025-05-30 00:47:35.475069 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-05-30 00:47:35.475078 | orchestrator | to this access issue: 2025-05-30 00:47:35.475088 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-05-30 00:47:35.475098 | orchestrator | directory 2025-05-30 00:47:35.475107 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-30 00:47:35.475117 | orchestrator | 2025-05-30 00:47:35.475126 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-05-30 00:47:35.475136 | orchestrator | Friday 30 May 2025 00:46:23 +0000 (0:00:00.586) 0:00:57.723 ************ 2025-05-30 00:47:35.475145 | orchestrator | [WARNING]: Skipped 2025-05-30 00:47:35.475155 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-05-30 00:47:35.475164 | orchestrator | to this access issue: 2025-05-30 00:47:35.475174 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-05-30 00:47:35.475193 | orchestrator | directory 2025-05-30 00:47:35.475203 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-30 00:47:35.475213 | orchestrator | 2025-05-30 00:47:35.475222 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-05-30 00:47:35.475237 | orchestrator | Friday 30 May 2025 00:46:24 +0000 (0:00:00.545) 0:00:58.269 ************ 2025-05-30 00:47:35.475247 | orchestrator | [WARNING]: Skipped 2025-05-30 00:47:35.475256 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-05-30 00:47:35.475266 | orchestrator | to this access 
issue: 2025-05-30 00:47:35.475276 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-05-30 00:47:35.475285 | orchestrator | directory 2025-05-30 00:47:35.475295 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-30 00:47:35.475304 | orchestrator | 2025-05-30 00:47:35.475314 | orchestrator | TASK [common : Copying over td-agent.conf] ************************************* 2025-05-30 00:47:35.475324 | orchestrator | Friday 30 May 2025 00:46:24 +0000 (0:00:00.611) 0:00:58.881 ************ 2025-05-30 00:47:35.475333 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:47:35.475343 | orchestrator | changed: [testbed-manager] 2025-05-30 00:47:35.475352 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:47:35.475362 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:47:35.475371 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:47:35.475381 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:47:35.475390 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:47:35.475400 | orchestrator | 2025-05-30 00:47:35.475410 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-05-30 00:47:35.475419 | orchestrator | Friday 30 May 2025 00:46:28 +0000 (0:00:04.176) 0:01:03.058 ************ 2025-05-30 00:47:35.475429 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-30 00:47:35.475438 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-30 00:47:35.475448 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-30 00:47:35.475458 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-30 00:47:35.475467 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-30 00:47:35.475477 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-30 00:47:35.475486 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-30 00:47:35.475496 | orchestrator | 2025-05-30 00:47:35.475505 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-05-30 00:47:35.475515 | orchestrator | Friday 30 May 2025 00:46:31 +0000 (0:00:02.815) 0:01:05.873 ************ 2025-05-30 00:47:35.475525 | orchestrator | changed: [testbed-manager] 2025-05-30 00:47:35.475534 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:47:35.475544 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:47:35.475553 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:47:35.475563 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:47:35.475572 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:47:35.475582 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:47:35.475591 | orchestrator | 2025-05-30 00:47:35.475605 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-05-30 00:47:35.475615 | orchestrator | Friday 30 May 2025 00:46:34 +0000 (0:00:02.677) 0:01:08.550 ************ 2025-05-30 00:47:35.475625 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-30 00:47:35.475641 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.475652 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-30 00:47:35.475667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.475679 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.475704 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-30 00:47:35.475714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.475728 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.475763 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-30 00:47:35.475773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.475796 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.475806 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-30 00:47:35.475817 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.475827 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.475837 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-30 00:47:35.475859 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.475870 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.475880 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-30 00:47:35.475909 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:47:35.475920 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.475930 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.475940 | orchestrator | 2025-05-30 00:47:35.475950 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-05-30 00:47:35.475960 | orchestrator | Friday 30 May 2025 00:46:36 +0000 (0:00:02.535) 0:01:11.086 ************ 2025-05-30 00:47:35.475970 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-30 00:47:35.475979 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-30 00:47:35.475989 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-30 00:47:35.475999 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-30 00:47:35.476008 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-30 00:47:35.476023 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-30 00:47:35.476034 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-30 00:47:35.476043 | orchestrator | 2025-05-30 00:47:35.476053 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-05-30 00:47:35.476062 | orchestrator | Friday 30 May 2025 00:46:39 +0000 (0:00:02.659) 0:01:13.746 ************ 2025-05-30 00:47:35.476076 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-30 00:47:35.476086 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-30 00:47:35.476095 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-30 00:47:35.476105 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-30 00:47:35.476114 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-30 00:47:35.476124 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-30 00:47:35.476134 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-30 00:47:35.476143 | orchestrator | 2025-05-30 00:47:35.476153 | orchestrator | TASK [common : Check common 
containers] **************************************** 2025-05-30 00:47:35.476162 | orchestrator | Friday 30 May 2025 00:46:42 +0000 (0:00:02.612) 0:01:16.359 ************ 2025-05-30 00:47:35.476172 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-30 00:47:35.476183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-30 00:47:35.476198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-30 00:47:35.476208 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.476219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-30 00:47:35.476238 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-30 00:47:35.476248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.476258 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.476269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.476285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.476296 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-30 00:47:35.476311 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-30 00:47:35.476321 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.476335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.476346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.476356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.476366 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.476383 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
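[Editor's note] For readability, the long (item=...) payloads printed by the "Ensuring config directories ..." and "Check common containers" loops above and below are entries of the kolla-ansible common role's service dictionary, rendered by Ansible as Python dicts. Below is the fluentd entry reformatted; nothing is added except a variable name, layout and comments, and the values are copied from the log output:

# fluentd entry of the common role's service dict, as printed in the loop output above.
fluentd_service = {
    "container_name": "fluentd",
    "group": "fluentd",  # inventory group whose hosts run this container
    "enabled": True,
    "image": "registry.osism.tech/kolla/release/fluentd:5.0.5.20241206",
    "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
    "volumes": [
        "/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "kolla_logs:/var/log/kolla/",
        "fluentd_data:/var/lib/fluentd/data/",
        "/var/log/journal:/var/log/journal:ro",
    ],
    "dimensions": {},  # empty here; this field is where per-container runtime limits would go
}

The kolla-toolbox and cron entries in the same loops follow the same shape; only image, environment, privileged and volumes differ, as the remaining loop results show.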
2025-05-30 00:47:35.476394 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.476409 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.476419 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:47:35.476429 | orchestrator | 2025-05-30 00:47:35.476439 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-05-30 00:47:35.476449 | orchestrator | Friday 30 May 2025 00:46:45 +0000 (0:00:03.757) 0:01:20.116 ************ 2025-05-30 00:47:35.476459 | orchestrator | changed: [testbed-manager] 2025-05-30 00:47:35.476469 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:47:35.476478 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:47:35.476488 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:47:35.476497 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:47:35.476507 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:47:35.476516 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:47:35.476526 | orchestrator | 2025-05-30 00:47:35.476535 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-05-30 00:47:35.476550 | orchestrator | Friday 30 May 2025 00:46:47 +0000 (0:00:01.823) 0:01:21.940 ************ 2025-05-30 00:47:35.476560 | orchestrator | changed: [testbed-manager] 2025-05-30 00:47:35.476570 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:47:35.476579 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:47:35.476589 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:47:35.476598 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:47:35.476608 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:47:35.476617 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:47:35.476627 | orchestrator | 2025-05-30 00:47:35.476637 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-30 00:47:35.476646 | orchestrator | Friday 30 May 2025 00:46:49 +0000 (0:00:01.534) 0:01:23.474 ************ 2025-05-30 00:47:35.476656 | orchestrator | 2025-05-30 00:47:35.476665 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-30 00:47:35.476675 | orchestrator | Friday 30 May 2025 00:46:49 +0000 (0:00:00.056) 0:01:23.531 ************ 2025-05-30 
00:47:35.476685 | orchestrator | 2025-05-30 00:47:35.476694 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-30 00:47:35.476703 | orchestrator | Friday 30 May 2025 00:46:49 +0000 (0:00:00.067) 0:01:23.599 ************ 2025-05-30 00:47:35.476713 | orchestrator | 2025-05-30 00:47:35.476723 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-30 00:47:35.476754 | orchestrator | Friday 30 May 2025 00:46:49 +0000 (0:00:00.051) 0:01:23.650 ************ 2025-05-30 00:47:35.476765 | orchestrator | 2025-05-30 00:47:35.476775 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-30 00:47:35.476784 | orchestrator | Friday 30 May 2025 00:46:49 +0000 (0:00:00.220) 0:01:23.871 ************ 2025-05-30 00:47:35.476794 | orchestrator | 2025-05-30 00:47:35.476803 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-30 00:47:35.476813 | orchestrator | Friday 30 May 2025 00:46:49 +0000 (0:00:00.055) 0:01:23.926 ************ 2025-05-30 00:47:35.476830 | orchestrator | 2025-05-30 00:47:35.476839 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-30 00:47:35.476849 | orchestrator | Friday 30 May 2025 00:46:49 +0000 (0:00:00.050) 0:01:23.976 ************ 2025-05-30 00:47:35.476858 | orchestrator | 2025-05-30 00:47:35.476868 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-05-30 00:47:35.476878 | orchestrator | Friday 30 May 2025 00:46:49 +0000 (0:00:00.069) 0:01:24.045 ************ 2025-05-30 00:47:35.476887 | orchestrator | changed: [testbed-manager] 2025-05-30 00:47:35.476902 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:47:35.476912 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:47:35.476922 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:47:35.476931 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:47:35.476941 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:47:35.476950 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:47:35.476960 | orchestrator | 2025-05-30 00:47:35.476969 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-05-30 00:47:35.476979 | orchestrator | Friday 30 May 2025 00:46:58 +0000 (0:00:08.315) 0:01:32.361 ************ 2025-05-30 00:47:35.476989 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:47:35.476998 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:47:35.477007 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:47:35.477017 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:47:35.477027 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:47:35.477036 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:47:35.477046 | orchestrator | changed: [testbed-manager] 2025-05-30 00:47:35.477055 | orchestrator | 2025-05-30 00:47:35.477065 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-05-30 00:47:35.477074 | orchestrator | Friday 30 May 2025 00:47:22 +0000 (0:00:23.818) 0:01:56.180 ************ 2025-05-30 00:47:35.477084 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:47:35.477094 | orchestrator | ok: [testbed-manager] 2025-05-30 00:47:35.477103 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:47:35.477113 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:47:35.477122 | 
orchestrator | ok: [testbed-node-3] 2025-05-30 00:47:35.477132 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:47:35.477141 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:47:35.477151 | orchestrator | 2025-05-30 00:47:35.477160 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-05-30 00:47:35.477170 | orchestrator | Friday 30 May 2025 00:47:24 +0000 (0:00:02.662) 0:01:58.842 ************ 2025-05-30 00:47:35.477180 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:47:35.477189 | orchestrator | changed: [testbed-manager] 2025-05-30 00:47:35.477199 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:47:35.477208 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:47:35.477218 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:47:35.477228 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:47:35.477237 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:47:35.477246 | orchestrator | 2025-05-30 00:47:35.477256 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:47:35.477267 | orchestrator | testbed-manager : ok=25  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-30 00:47:35.477277 | orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-30 00:47:35.477287 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-30 00:47:35.477297 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-30 00:47:35.477311 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-30 00:47:35.477327 | orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-30 00:47:35.477337 | orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-30 00:47:35.477346 | orchestrator | 2025-05-30 00:47:35.477356 | orchestrator | 2025-05-30 00:47:35.477365 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-30 00:47:35.477375 | orchestrator | Friday 30 May 2025 00:47:34 +0000 (0:00:09.528) 0:02:08.371 ************ 2025-05-30 00:47:35.477385 | orchestrator | =============================================================================== 2025-05-30 00:47:35.477394 | orchestrator | common : Ensure fluentd image is present for label check --------------- 27.60s 2025-05-30 00:47:35.477404 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 23.82s 2025-05-30 00:47:35.477413 | orchestrator | common : Restart cron container ----------------------------------------- 9.53s 2025-05-30 00:47:35.477423 | orchestrator | common : Restart fluentd container -------------------------------------- 8.32s 2025-05-30 00:47:35.477432 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.85s 2025-05-30 00:47:35.477442 | orchestrator | common : Copying over config.json files for services -------------------- 4.58s 2025-05-30 00:47:35.477451 | orchestrator | common : Copying over td-agent.conf ------------------------------------- 4.18s 2025-05-30 00:47:35.477461 | orchestrator | common : Check common containers ---------------------------------------- 3.76s 2025-05-30 00:47:35.477470 | orchestrator | common : 
Ensuring config directories exist ------------------------------ 3.44s 2025-05-30 00:47:35.477480 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 2.92s 2025-05-30 00:47:35.477489 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.82s 2025-05-30 00:47:35.477499 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.68s 2025-05-30 00:47:35.477508 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.66s 2025-05-30 00:47:35.477518 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.66s 2025-05-30 00:47:35.477532 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.61s 2025-05-30 00:47:35.477542 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.54s 2025-05-30 00:47:35.477552 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.43s 2025-05-30 00:47:35.477561 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.07s 2025-05-30 00:47:35.477571 | orchestrator | common : Creating log volume -------------------------------------------- 1.82s 2025-05-30 00:47:35.477580 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.53s 2025-05-30 00:47:35.477590 | orchestrator | 2025-05-30 00:47:35 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:47:38.509216 | orchestrator | 2025-05-30 00:47:38 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:47:38.509576 | orchestrator | 2025-05-30 00:47:38 | INFO  | Task ca71818c-eff1-44fe-b8f6-a11ba8f1af73 is in state STARTED 2025-05-30 00:47:38.509780 | orchestrator | 2025-05-30 00:47:38 | INFO  | Task c262c8b6-6e3b-4015-b2c8-613fecc21887 is in state STARTED 2025-05-30 00:47:38.510179 | orchestrator | 2025-05-30 00:47:38 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:47:38.510595 | orchestrator | 2025-05-30 00:47:38 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:47:38.511331 | orchestrator | 2025-05-30 00:47:38 | INFO  | Task 0dccc581-f44e-4dce-956a-a820804c5e66 is in state STARTED 2025-05-30 00:47:38.511353 | orchestrator | 2025-05-30 00:47:38 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:47:41.537458 | orchestrator | 2025-05-30 00:47:41 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:47:41.537611 | orchestrator | 2025-05-30 00:47:41 | INFO  | Task ca71818c-eff1-44fe-b8f6-a11ba8f1af73 is in state STARTED 2025-05-30 00:47:41.537771 | orchestrator | 2025-05-30 00:47:41 | INFO  | Task c262c8b6-6e3b-4015-b2c8-613fecc21887 is in state STARTED 2025-05-30 00:47:41.537935 | orchestrator | 2025-05-30 00:47:41 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:47:41.538343 | orchestrator | 2025-05-30 00:47:41 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:47:41.538866 | orchestrator | 2025-05-30 00:47:41 | INFO  | Task 0dccc581-f44e-4dce-956a-a820804c5e66 is in state STARTED 2025-05-30 00:47:41.538890 | orchestrator | 2025-05-30 00:47:41 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:47:44.570120 | orchestrator | 2025-05-30 00:47:44 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 
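[Editor's note] The repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" entries in this part of the log are the deployment wrapper polling the background tasks it started on the manager until each reports SUCCESS (the reported states match Celery task states). A minimal sketch of that kind of loop, assuming a Celery-style AsyncResult interface; the names are illustrative and this is not the actual osism implementation:

import time
from celery.result import AsyncResult  # assumes a configured Celery app and result backend

def wait_for_tasks(task_ids, interval=1):
    # Poll every task ID until it leaves the running states, mirroring the
    # "is in state ..." / "Wait ... until the next check" lines in the log.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = AsyncResult(task_id).state  # e.g. STARTED, SUCCESS
            print(f"INFO  | Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE", "REVOKED"):
                pending.discard(task_id)
        if pending:
            print(f"INFO  | Wait {interval} second(s) until the next check")
            time.sleep(interval)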
2025-05-30 00:47:44.570334 | orchestrator | 2025-05-30 00:47:44 | INFO  | Task ca71818c-eff1-44fe-b8f6-a11ba8f1af73 is in state STARTED 2025-05-30 00:47:44.570580 | orchestrator | 2025-05-30 00:47:44 | INFO  | Task c262c8b6-6e3b-4015-b2c8-613fecc21887 is in state STARTED 2025-05-30 00:47:44.571444 | orchestrator | 2025-05-30 00:47:44 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:47:44.572176 | orchestrator | 2025-05-30 00:47:44 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:47:44.572718 | orchestrator | 2025-05-30 00:47:44 | INFO  | Task 0dccc581-f44e-4dce-956a-a820804c5e66 is in state STARTED 2025-05-30 00:47:44.573106 | orchestrator | 2025-05-30 00:47:44 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:47:47.626193 | orchestrator | 2025-05-30 00:47:47 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:47:47.626776 | orchestrator | 2025-05-30 00:47:47 | INFO  | Task ca71818c-eff1-44fe-b8f6-a11ba8f1af73 is in state STARTED 2025-05-30 00:47:47.627600 | orchestrator | 2025-05-30 00:47:47 | INFO  | Task c262c8b6-6e3b-4015-b2c8-613fecc21887 is in state STARTED 2025-05-30 00:47:47.628309 | orchestrator | 2025-05-30 00:47:47 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:47:47.629622 | orchestrator | 2025-05-30 00:47:47 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:47:47.631020 | orchestrator | 2025-05-30 00:47:47 | INFO  | Task 0dccc581-f44e-4dce-956a-a820804c5e66 is in state STARTED 2025-05-30 00:47:47.631047 | orchestrator | 2025-05-30 00:47:47 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:47:50.668327 | orchestrator | 2025-05-30 00:47:50 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:47:50.668940 | orchestrator | 2025-05-30 00:47:50 | INFO  | Task ca71818c-eff1-44fe-b8f6-a11ba8f1af73 is in state STARTED 2025-05-30 00:47:50.673520 | orchestrator | 2025-05-30 00:47:50 | INFO  | Task c262c8b6-6e3b-4015-b2c8-613fecc21887 is in state STARTED 2025-05-30 00:47:50.674734 | orchestrator | 2025-05-30 00:47:50 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:47:50.675783 | orchestrator | 2025-05-30 00:47:50 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:47:50.676171 | orchestrator | 2025-05-30 00:47:50 | INFO  | Task 0dccc581-f44e-4dce-956a-a820804c5e66 is in state STARTED 2025-05-30 00:47:50.676190 | orchestrator | 2025-05-30 00:47:50 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:47:53.725440 | orchestrator | 2025-05-30 00:47:53 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:47:53.725553 | orchestrator | 2025-05-30 00:47:53 | INFO  | Task ca71818c-eff1-44fe-b8f6-a11ba8f1af73 is in state STARTED 2025-05-30 00:47:53.728391 | orchestrator | 2025-05-30 00:47:53 | INFO  | Task c262c8b6-6e3b-4015-b2c8-613fecc21887 is in state STARTED 2025-05-30 00:47:53.730550 | orchestrator | 2025-05-30 00:47:53 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:47:53.732686 | orchestrator | 2025-05-30 00:47:53 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:47:53.736899 | orchestrator | 2025-05-30 00:47:53 | INFO  | Task 0dccc581-f44e-4dce-956a-a820804c5e66 is in state STARTED 2025-05-30 00:47:53.736933 | orchestrator | 2025-05-30 00:47:53 | INFO  | Wait 1 second(s) 
until the next check 2025-05-30 00:47:56.780415 | orchestrator | 2025-05-30 00:47:56 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:47:56.782698 | orchestrator | 2025-05-30 00:47:56 | INFO  | Task ca71818c-eff1-44fe-b8f6-a11ba8f1af73 is in state SUCCESS 2025-05-30 00:47:56.789298 | orchestrator | 2025-05-30 00:47:56 | INFO  | Task c262c8b6-6e3b-4015-b2c8-613fecc21887 is in state STARTED 2025-05-30 00:47:56.793296 | orchestrator | 2025-05-30 00:47:56 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:47:56.795246 | orchestrator | 2025-05-30 00:47:56 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:47:56.796973 | orchestrator | 2025-05-30 00:47:56 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:47:56.798209 | orchestrator | 2025-05-30 00:47:56 | INFO  | Task 0dccc581-f44e-4dce-956a-a820804c5e66 is in state STARTED 2025-05-30 00:47:56.799649 | orchestrator | 2025-05-30 00:47:56 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:47:59.857505 | orchestrator | 2025-05-30 00:47:59 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:47:59.857684 | orchestrator | 2025-05-30 00:47:59 | INFO  | Task c262c8b6-6e3b-4015-b2c8-613fecc21887 is in state STARTED 2025-05-30 00:47:59.858342 | orchestrator | 2025-05-30 00:47:59 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:47:59.859449 | orchestrator | 2025-05-30 00:47:59 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:47:59.860286 | orchestrator | 2025-05-30 00:47:59 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:47:59.864226 | orchestrator | 2025-05-30 00:47:59 | INFO  | Task 0dccc581-f44e-4dce-956a-a820804c5e66 is in state STARTED 2025-05-30 00:47:59.864251 | orchestrator | 2025-05-30 00:47:59 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:48:02.903490 | orchestrator | 2025-05-30 00:48:02 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:48:02.905039 | orchestrator | 2025-05-30 00:48:02 | INFO  | Task c262c8b6-6e3b-4015-b2c8-613fecc21887 is in state STARTED 2025-05-30 00:48:02.905270 | orchestrator | 2025-05-30 00:48:02 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:48:02.908216 | orchestrator | 2025-05-30 00:48:02 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:48:02.909118 | orchestrator | 2025-05-30 00:48:02 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:48:02.910946 | orchestrator | 2025-05-30 00:48:02 | INFO  | Task 0dccc581-f44e-4dce-956a-a820804c5e66 is in state STARTED 2025-05-30 00:48:02.911022 | orchestrator | 2025-05-30 00:48:02 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:48:05.951261 | orchestrator | 2025-05-30 00:48:05 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:48:05.954999 | orchestrator | 2025-05-30 00:48:05 | INFO  | Task c262c8b6-6e3b-4015-b2c8-613fecc21887 is in state STARTED 2025-05-30 00:48:05.955978 | orchestrator | 2025-05-30 00:48:05 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:48:05.957078 | orchestrator | 2025-05-30 00:48:05 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:48:05.958103 | orchestrator | 2025-05-30 00:48:05 | 
INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:48:05.959137 | orchestrator | 2025-05-30 00:48:05 | INFO  | Task 0dccc581-f44e-4dce-956a-a820804c5e66 is in state STARTED 2025-05-30 00:48:05.959156 | orchestrator | 2025-05-30 00:48:05 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:48:09.009204 | orchestrator | 2025-05-30 00:48:09 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:48:09.011231 | orchestrator | 2025-05-30 00:48:09 | INFO  | Task c262c8b6-6e3b-4015-b2c8-613fecc21887 is in state STARTED 2025-05-30 00:48:09.011262 | orchestrator | 2025-05-30 00:48:09 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:48:09.013026 | orchestrator | 2025-05-30 00:48:09 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:48:09.015674 | orchestrator | 2025-05-30 00:48:09 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:48:09.016984 | orchestrator | 2025-05-30 00:48:09 | INFO  | Task 0dccc581-f44e-4dce-956a-a820804c5e66 is in state STARTED 2025-05-30 00:48:09.017015 | orchestrator | 2025-05-30 00:48:09 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:48:12.056469 | orchestrator | 2025-05-30 00:48:12 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:48:12.057321 | orchestrator | 2025-05-30 00:48:12.057359 | orchestrator | 2025-05-30 00:48:12.057372 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-30 00:48:12.057384 | orchestrator | 2025-05-30 00:48:12.057395 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-30 00:48:12.057407 | orchestrator | Friday 30 May 2025 00:47:39 +0000 (0:00:00.352) 0:00:00.352 ************ 2025-05-30 00:48:12.057418 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:48:12.057430 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:48:12.057441 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:48:12.057452 | orchestrator | 2025-05-30 00:48:12.057463 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-30 00:48:12.057474 | orchestrator | Friday 30 May 2025 00:47:40 +0000 (0:00:00.320) 0:00:00.672 ************ 2025-05-30 00:48:12.057486 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-05-30 00:48:12.057498 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-05-30 00:48:12.057526 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-05-30 00:48:12.057537 | orchestrator | 2025-05-30 00:48:12.057549 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-05-30 00:48:12.057560 | orchestrator | 2025-05-30 00:48:12.057571 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-05-30 00:48:12.057582 | orchestrator | Friday 30 May 2025 00:47:40 +0000 (0:00:00.345) 0:00:01.018 ************ 2025-05-30 00:48:12.057593 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:48:12.057605 | orchestrator | 2025-05-30 00:48:12.057639 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-05-30 00:48:12.057650 | orchestrator | Friday 30 May 2025 00:47:40 +0000 (0:00:00.557) 0:00:01.575 ************ 2025-05-30 
00:48:12.057661 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-30 00:48:12.057673 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-30 00:48:12.057684 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-30 00:48:12.057694 | orchestrator | 2025-05-30 00:48:12.057705 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-05-30 00:48:12.057716 | orchestrator | Friday 30 May 2025 00:47:41 +0000 (0:00:00.690) 0:00:02.266 ************ 2025-05-30 00:48:12.057726 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-30 00:48:12.057737 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-30 00:48:12.057795 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-30 00:48:12.057807 | orchestrator | 2025-05-30 00:48:12.057818 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-05-30 00:48:12.057829 | orchestrator | Friday 30 May 2025 00:47:43 +0000 (0:00:02.010) 0:00:04.276 ************ 2025-05-30 00:48:12.057839 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:48:12.057850 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:48:12.057861 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:48:12.057872 | orchestrator | 2025-05-30 00:48:12.057883 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-05-30 00:48:12.057893 | orchestrator | Friday 30 May 2025 00:47:45 +0000 (0:00:02.026) 0:00:06.303 ************ 2025-05-30 00:48:12.057904 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:48:12.057915 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:48:12.057925 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:48:12.057936 | orchestrator | 2025-05-30 00:48:12.057946 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:48:12.057958 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:48:12.057970 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:48:12.057981 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:48:12.057992 | orchestrator | 2025-05-30 00:48:12.058003 | orchestrator | 2025-05-30 00:48:12.058014 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-30 00:48:12.058081 | orchestrator | Friday 30 May 2025 00:47:54 +0000 (0:00:08.498) 0:00:14.802 ************ 2025-05-30 00:48:12.058092 | orchestrator | =============================================================================== 2025-05-30 00:48:12.058103 | orchestrator | memcached : Restart memcached container --------------------------------- 8.50s 2025-05-30 00:48:12.058114 | orchestrator | memcached : Check memcached container ----------------------------------- 2.03s 2025-05-30 00:48:12.058124 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.01s 2025-05-30 00:48:12.058135 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.69s 2025-05-30 00:48:12.058146 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.56s 2025-05-30 00:48:12.058157 | orchestrator | Group hosts based on enabled services 
----------------------------------- 0.35s 2025-05-30 00:48:12.058167 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2025-05-30 00:48:12.058178 | orchestrator | 2025-05-30 00:48:12.058189 | orchestrator | 2025-05-30 00:48:12.058199 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-30 00:48:12.058210 | orchestrator | 2025-05-30 00:48:12.058221 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-30 00:48:12.058231 | orchestrator | Friday 30 May 2025 00:47:39 +0000 (0:00:00.267) 0:00:00.267 ************ 2025-05-30 00:48:12.058252 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:48:12.058263 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:48:12.058273 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:48:12.058284 | orchestrator | 2025-05-30 00:48:12.058295 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-30 00:48:12.058321 | orchestrator | Friday 30 May 2025 00:47:39 +0000 (0:00:00.327) 0:00:00.595 ************ 2025-05-30 00:48:12.058333 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-05-30 00:48:12.058344 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-05-30 00:48:12.058355 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-05-30 00:48:12.058365 | orchestrator | 2025-05-30 00:48:12.058376 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-05-30 00:48:12.058387 | orchestrator | 2025-05-30 00:48:12.058397 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-05-30 00:48:12.058408 | orchestrator | Friday 30 May 2025 00:47:39 +0000 (0:00:00.230) 0:00:00.826 ************ 2025-05-30 00:48:12.058419 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:48:12.058430 | orchestrator | 2025-05-30 00:48:12.058440 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-05-30 00:48:12.058457 | orchestrator | Friday 30 May 2025 00:47:40 +0000 (0:00:00.677) 0:00:01.504 ************ 2025-05-30 00:48:12.058472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-30 00:48:12.058488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-30 00:48:12.058500 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-30 00:48:12.058512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-30 00:48:12.058524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-30 00:48:12.058553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-30 00:48:12.058565 | orchestrator | 2025-05-30 00:48:12.058576 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-05-30 00:48:12.058587 | orchestrator | Friday 30 May 2025 00:47:41 +0000 (0:00:01.110) 0:00:02.614 ************ 2025-05-30 00:48:12.058604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-30 00:48:12.058616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-30 00:48:12.058627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-30 00:48:12.058639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-30 00:48:12.058657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-30 00:48:12.058675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-30 
00:48:12.058688 | orchestrator | 2025-05-30 00:48:12.058699 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-05-30 00:48:12.058710 | orchestrator | Friday 30 May 2025 00:47:44 +0000 (0:00:02.795) 0:00:05.410 ************ 2025-05-30 00:48:12.058727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-30 00:48:12.058739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-30 00:48:12.058788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-30 00:48:12.058800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-30 00:48:12.058818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': 
'30'}}}) 2025-05-30 00:48:12.058838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-30 00:48:12.058849 | orchestrator | 2025-05-30 00:48:12.058860 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-05-30 00:48:12.058871 | orchestrator | Friday 30 May 2025 00:47:47 +0000 (0:00:03.074) 0:00:08.484 ************ 2025-05-30 00:48:12.058888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-30 00:48:12.058899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-30 00:48:12.058911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-30 00:48:12.058922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 
26379'], 'timeout': '30'}}}) 2025-05-30 00:48:12.058940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-30 00:48:12.058958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-30 00:48:12.058969 | orchestrator | 2025-05-30 00:48:12.058980 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-30 00:48:12.058991 | orchestrator | Friday 30 May 2025 00:47:49 +0000 (0:00:02.370) 0:00:10.854 ************ 2025-05-30 00:48:12.059002 | orchestrator | 2025-05-30 00:48:12.059019 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-30 00:48:12.059037 | orchestrator | Friday 30 May 2025 00:47:49 +0000 (0:00:00.071) 0:00:10.926 ************ 2025-05-30 00:48:12.059057 | orchestrator | 2025-05-30 00:48:12.059075 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-30 00:48:12.059092 | orchestrator | Friday 30 May 2025 00:47:49 +0000 (0:00:00.050) 0:00:10.976 ************ 2025-05-30 00:48:12.059111 | orchestrator | 2025-05-30 00:48:12.059132 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-05-30 00:48:12.059151 | orchestrator | Friday 30 May 2025 00:47:49 +0000 (0:00:00.055) 0:00:11.031 ************ 2025-05-30 00:48:12.059166 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:48:12.059177 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:48:12.059187 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:48:12.059198 | orchestrator | 2025-05-30 00:48:12.059209 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-05-30 00:48:12.059220 | orchestrator | Friday 30 May 2025 00:47:57 +0000 (0:00:07.293) 0:00:18.325 ************ 2025-05-30 00:48:12.059230 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:48:12.059241 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:48:12.059252 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:48:12.059262 | orchestrator | 2025-05-30 00:48:12.059273 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 
00:48:12.059284 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:48:12.059295 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:48:12.059306 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:48:12.059326 | orchestrator | 2025-05-30 00:48:12.059337 | orchestrator | 2025-05-30 00:48:12.059348 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-30 00:48:12.059365 | orchestrator | Friday 30 May 2025 00:48:08 +0000 (0:00:11.479) 0:00:29.804 ************ 2025-05-30 00:48:12.059377 | orchestrator | =============================================================================== 2025-05-30 00:48:12.059388 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 11.48s 2025-05-30 00:48:12.059399 | orchestrator | redis : Restart redis container ----------------------------------------- 7.29s 2025-05-30 00:48:12.059409 | orchestrator | redis : Copying over redis config files --------------------------------- 3.07s 2025-05-30 00:48:12.059420 | orchestrator | redis : Copying over default config.json files -------------------------- 2.80s 2025-05-30 00:48:12.059431 | orchestrator | redis : Check redis containers ------------------------------------------ 2.37s 2025-05-30 00:48:12.059441 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.11s 2025-05-30 00:48:12.059452 | orchestrator | redis : include_tasks --------------------------------------------------- 0.68s 2025-05-30 00:48:12.059463 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2025-05-30 00:48:12.059473 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.23s 2025-05-30 00:48:12.059484 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.18s 2025-05-30 00:48:12.059494 | orchestrator | 2025-05-30 00:48:12 | INFO  | Task c262c8b6-6e3b-4015-b2c8-613fecc21887 is in state SUCCESS 2025-05-30 00:48:12.059505 | orchestrator | 2025-05-30 00:48:12 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:48:12.059516 | orchestrator | 2025-05-30 00:48:12 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:48:12.059532 | orchestrator | 2025-05-30 00:48:12 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:48:12.061174 | orchestrator | 2025-05-30 00:48:12 | INFO  | Task 0dccc581-f44e-4dce-956a-a820804c5e66 is in state STARTED 2025-05-30 00:48:12.061267 | orchestrator | 2025-05-30 00:48:12 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:48:15.100532 | orchestrator | 2025-05-30 00:48:15 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:48:15.100697 | orchestrator | 2025-05-30 00:48:15 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:48:15.100846 | orchestrator | 2025-05-30 00:48:15 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:48:15.101645 | orchestrator | 2025-05-30 00:48:15 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:48:15.103308 | orchestrator | 2025-05-30 00:48:15 | INFO  | Task 0dccc581-f44e-4dce-956a-a820804c5e66 is in state STARTED 
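The redis play that finishes above loops over a per-service dictionary whose items are printed verbatim in the task output: each entry carries a container name, an image reference, bind-mounted volumes and a healthcheck block, and the handlers restart the redis and redis_sentinel containers once their config files have changed. As a rough, non-authoritative sketch of that data shape (field names and values are copied from the log output above; summarize_services() is a hypothetical helper for illustration and is not part of kolla-ansible), iterating over it looks roughly like this:

# Sketch only: mirrors the per-service dictionaries shown by the redis role above.
# The structure (container_name, image, volumes, healthcheck, ...) is taken from the
# log output; summarize_services() is a hypothetical helper, not kolla-ansible code.
redis_services = {
    "redis": {
        "container_name": "redis",
        "group": "redis",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/redis:6.0.16.20241206",
        "volumes": [
            "/etc/kolla/redis/:/var/lib/kolla/config_files/:ro",
            "redis:/var/lib/redis/",
            "kolla_logs:/var/log/kolla/",
        ],
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_listen redis-server 6379"],
            "timeout": "30",
        },
    },
    "redis-sentinel": {
        "container_name": "redis_sentinel",
        "group": "redis",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206",
        "healthcheck": {
            "test": ["CMD-SHELL", "healthcheck_listen redis-sentinel 26379"],
        },
    },
}

def summarize_services(services: dict) -> None:
    """Print which containers the role would handle and how each is health-checked."""
    for key, svc in services.items():
        if not svc.get("enabled"):
            continue
        check = " ".join(svc.get("healthcheck", {}).get("test", [])[1:]) or "none"
        print(f"{key}: container={svc['container_name']} image={svc['image']} healthcheck={check}")

if __name__ == "__main__":
    summarize_services(redis_services)

The same shape appears later for the openvswitch role (openvswitch-db-server and openvswitch-vswitchd), only with different images, volumes and healthcheck commands.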
2025-05-30 00:48:15.103344 | orchestrator | 2025-05-30 00:48:15 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:48:18.147993 | orchestrator | 2025-05-30 00:48:18 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:48:18.150583 | orchestrator | 2025-05-30 00:48:18 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:48:18.153001 | orchestrator | 2025-05-30 00:48:18 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:48:18.153034 | orchestrator | 2025-05-30 00:48:18 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:48:18.153438 | orchestrator | 2025-05-30 00:48:18 | INFO  | Task 0dccc581-f44e-4dce-956a-a820804c5e66 is in state STARTED 2025-05-30 00:48:18.153859 | orchestrator | 2025-05-30 00:48:18 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:48:21.193170 | orchestrator | 2025-05-30 00:48:21 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:48:21.193345 | orchestrator | 2025-05-30 00:48:21 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:48:21.194225 | orchestrator | 2025-05-30 00:48:21 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:48:21.194715 | orchestrator | 2025-05-30 00:48:21 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:48:21.195510 | orchestrator | 2025-05-30 00:48:21 | INFO  | Task 0dccc581-f44e-4dce-956a-a820804c5e66 is in state STARTED 2025-05-30 00:48:21.195552 | orchestrator | 2025-05-30 00:48:21 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:48:24.238316 | orchestrator | 2025-05-30 00:48:24 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:48:24.239158 | orchestrator | 2025-05-30 00:48:24 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:48:24.240854 | orchestrator | 2025-05-30 00:48:24 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:48:24.241998 | orchestrator | 2025-05-30 00:48:24 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:48:24.243158 | orchestrator | 2025-05-30 00:48:24 | INFO  | Task 0dccc581-f44e-4dce-956a-a820804c5e66 is in state STARTED 2025-05-30 00:48:24.243231 | orchestrator | 2025-05-30 00:48:24 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:48:27.296624 | orchestrator | 2025-05-30 00:48:27 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:48:27.296965 | orchestrator | 2025-05-30 00:48:27 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:48:27.297701 | orchestrator | 2025-05-30 00:48:27 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:48:27.302114 | orchestrator | 2025-05-30 00:48:27 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:48:27.303695 | orchestrator | 2025-05-30 00:48:27 | INFO  | Task 0dccc581-f44e-4dce-956a-a820804c5e66 is in state STARTED 2025-05-30 00:48:27.303727 | orchestrator | 2025-05-30 00:48:27 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:48:30.349475 | orchestrator | 2025-05-30 00:48:30 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:48:30.349709 | orchestrator | 2025-05-30 00:48:30 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 
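The interleaved status lines come from the deployment tooling polling its background tasks (identified by UUID) until each one leaves the STARTED state; the "Wait 1 second(s) until the next check" message recurs roughly every three seconds until a task flips to SUCCESS. A minimal sketch of that wait loop, assuming a hypothetical get_task_state() lookup (the actual osism client API is not shown in this log and may differ), could look like this:

import time

# Sketch of the polling pattern visible in the surrounding log. get_task_state() is a
# hypothetical stand-in for however the tooling queries task state; the real API may differ.
def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll the given task IDs until none of them is in state STARTED."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":  # e.g. SUCCESS or FAILURE
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

# Usage (illustrative): wait_for_tasks(["fb4c5da4-...", "634c67c3-..."], get_task_state)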
2025-05-30 00:48:30.351062 | orchestrator | 2025-05-30 00:48:30 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:48:30.352171 | orchestrator | 2025-05-30 00:48:30 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:48:30.352332 | orchestrator | 2025-05-30 00:48:30 | INFO  | Task 0dccc581-f44e-4dce-956a-a820804c5e66 is in state STARTED 2025-05-30 00:48:30.352359 | orchestrator | 2025-05-30 00:48:30 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:48:33.394918 | orchestrator | 2025-05-30 00:48:33 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:48:33.395939 | orchestrator | 2025-05-30 00:48:33 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:48:33.398403 | orchestrator | 2025-05-30 00:48:33 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:48:33.400359 | orchestrator | 2025-05-30 00:48:33 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:48:33.402158 | orchestrator | 2025-05-30 00:48:33 | INFO  | Task 0dccc581-f44e-4dce-956a-a820804c5e66 is in state STARTED 2025-05-30 00:48:33.402625 | orchestrator | 2025-05-30 00:48:33 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:48:36.454448 | orchestrator | 2025-05-30 00:48:36 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:48:36.454564 | orchestrator | 2025-05-30 00:48:36 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:48:36.454580 | orchestrator | 2025-05-30 00:48:36 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:48:36.454592 | orchestrator | 2025-05-30 00:48:36 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:48:36.454621 | orchestrator | 2025-05-30 00:48:36 | INFO  | Task 0dccc581-f44e-4dce-956a-a820804c5e66 is in state STARTED 2025-05-30 00:48:36.454633 | orchestrator | 2025-05-30 00:48:36 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:48:39.477423 | orchestrator | 2025-05-30 00:48:39 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:48:39.477653 | orchestrator | 2025-05-30 00:48:39 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:48:39.478395 | orchestrator | 2025-05-30 00:48:39 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:48:39.479242 | orchestrator | 2025-05-30 00:48:39 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:48:39.482910 | orchestrator | 2025-05-30 00:48:39 | INFO  | Task 0dccc581-f44e-4dce-956a-a820804c5e66 is in state STARTED 2025-05-30 00:48:39.482964 | orchestrator | 2025-05-30 00:48:39 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:48:42.528184 | orchestrator | 2025-05-30 00:48:42 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:48:42.529093 | orchestrator | 2025-05-30 00:48:42 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:48:42.530941 | orchestrator | 2025-05-30 00:48:42 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:48:42.531399 | orchestrator | 2025-05-30 00:48:42 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:48:42.532245 | orchestrator | 2025-05-30 00:48:42 | INFO  | Task 
0dccc581-f44e-4dce-956a-a820804c5e66 is in state STARTED 2025-05-30 00:48:42.533381 | orchestrator | 2025-05-30 00:48:42 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:48:45.582300 | orchestrator | 2025-05-30 00:48:45 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:48:45.583360 | orchestrator | 2025-05-30 00:48:45 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:48:45.585435 | orchestrator | 2025-05-30 00:48:45 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:48:45.587924 | orchestrator | 2025-05-30 00:48:45 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:48:45.587954 | orchestrator | 2025-05-30 00:48:45 | INFO  | Task 0dccc581-f44e-4dce-956a-a820804c5e66 is in state STARTED 2025-05-30 00:48:45.587967 | orchestrator | 2025-05-30 00:48:45 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:48:48.630167 | orchestrator | 2025-05-30 00:48:48 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:48:48.630669 | orchestrator | 2025-05-30 00:48:48 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:48:48.631593 | orchestrator | 2025-05-30 00:48:48 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:48:48.632420 | orchestrator | 2025-05-30 00:48:48 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:48:48.633585 | orchestrator | 2025-05-30 00:48:48 | INFO  | Task 0dccc581-f44e-4dce-956a-a820804c5e66 is in state STARTED 2025-05-30 00:48:48.633607 | orchestrator | 2025-05-30 00:48:48 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:48:51.679056 | orchestrator | 2025-05-30 00:48:51 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:48:51.682845 | orchestrator | 2025-05-30 00:48:51 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:48:51.683727 | orchestrator | 2025-05-30 00:48:51 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:48:51.686361 | orchestrator | 2025-05-30 00:48:51 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:48:51.690179 | orchestrator | 2025-05-30 00:48:51 | INFO  | Task 0dccc581-f44e-4dce-956a-a820804c5e66 is in state SUCCESS 2025-05-30 00:48:51.691846 | orchestrator | 2025-05-30 00:48:51.691882 | orchestrator | 2025-05-30 00:48:51.691894 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-30 00:48:51.691906 | orchestrator | 2025-05-30 00:48:51.691918 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-30 00:48:51.691929 | orchestrator | Friday 30 May 2025 00:47:39 +0000 (0:00:00.362) 0:00:00.362 ************ 2025-05-30 00:48:51.691941 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:48:51.691952 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:48:51.691963 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:48:51.691974 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:48:51.691985 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:48:51.691996 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:48:51.692007 | orchestrator | 2025-05-30 00:48:51.692018 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-30 00:48:51.692044 | orchestrator | Friday 30 May 2025 
00:47:39 +0000 (0:00:00.563) 0:00:00.925 ************ 2025-05-30 00:48:51.692056 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-30 00:48:51.692067 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-30 00:48:51.692078 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-30 00:48:51.692089 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-30 00:48:51.692099 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-30 00:48:51.692110 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-30 00:48:51.692121 | orchestrator | 2025-05-30 00:48:51.692132 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-05-30 00:48:51.692143 | orchestrator | 2025-05-30 00:48:51.692190 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-05-30 00:48:51.692207 | orchestrator | Friday 30 May 2025 00:47:40 +0000 (0:00:00.780) 0:00:01.706 ************ 2025-05-30 00:48:51.692220 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:48:51.692233 | orchestrator | 2025-05-30 00:48:51.692245 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-30 00:48:51.692256 | orchestrator | Friday 30 May 2025 00:47:41 +0000 (0:00:01.286) 0:00:02.992 ************ 2025-05-30 00:48:51.692267 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-05-30 00:48:51.692278 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-05-30 00:48:51.692313 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-05-30 00:48:51.692325 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-05-30 00:48:51.692335 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-05-30 00:48:51.692346 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-05-30 00:48:51.692356 | orchestrator | 2025-05-30 00:48:51.692367 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-30 00:48:51.692378 | orchestrator | Friday 30 May 2025 00:47:43 +0000 (0:00:01.502) 0:00:04.494 ************ 2025-05-30 00:48:51.692388 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-05-30 00:48:51.692399 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-05-30 00:48:51.692410 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-05-30 00:48:51.692420 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-05-30 00:48:51.692431 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-05-30 00:48:51.692441 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-05-30 00:48:51.692453 | orchestrator | 2025-05-30 00:48:51.692465 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-30 00:48:51.692477 | orchestrator | Friday 30 May 2025 00:47:45 +0000 (0:00:01.878) 0:00:06.373 ************ 2025-05-30 00:48:51.692490 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-05-30 00:48:51.692502 | orchestrator | skipping: 
[testbed-node-0] 2025-05-30 00:48:51.692516 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-05-30 00:48:51.692529 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:48:51.692541 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-05-30 00:48:51.692553 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:48:51.692565 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-05-30 00:48:51.692577 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:48:51.692590 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-05-30 00:48:51.692603 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:48:51.692615 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-05-30 00:48:51.692628 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:48:51.692640 | orchestrator | 2025-05-30 00:48:51.692652 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-05-30 00:48:51.692665 | orchestrator | Friday 30 May 2025 00:47:46 +0000 (0:00:01.559) 0:00:07.932 ************ 2025-05-30 00:48:51.692677 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:48:51.692690 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:48:51.692702 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:48:51.692715 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:48:51.692727 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:48:51.692739 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:48:51.692751 | orchestrator | 2025-05-30 00:48:51.692823 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-05-30 00:48:51.692837 | orchestrator | Friday 30 May 2025 00:47:47 +0000 (0:00:00.639) 0:00:08.572 ************ 2025-05-30 00:48:51.692870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-30 00:48:51.692894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-30 00:48:51.692915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-30 00:48:51.692927 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-30 00:48:51.692939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-30 00:48:51.692957 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-30 00:48:51.692975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-30 00:48:51.692993 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-30 00:48:51.693004 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-30 00:48:51.693016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-30 00:48:51.693027 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-30 00:48:51.693045 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-30 00:48:51.693057 | orchestrator | 2025-05-30 00:48:51.693068 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-05-30 00:48:51.693086 | orchestrator | Friday 30 May 2025 00:47:49 +0000 (0:00:02.000) 0:00:10.572 ************ 2025-05-30 00:48:51.693103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-30 00:48:51.693115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-30 00:48:51.693127 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-30 00:48:51.693138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': 
'30'}}}) 2025-05-30 00:48:51.693150 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-30 00:48:51.693172 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-30 00:48:51.693190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-30 00:48:51.693202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-30 00:48:51.693213 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-30 00:48:51.693225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-30 00:48:51.693242 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-30 00:48:51.693263 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-30 00:48:51.693275 | orchestrator | 2025-05-30 00:48:51.693286 | orchestrator | TASK [openvswitch : Copying over start-ovs file for openvswitch-vswitchd] ****** 2025-05-30 00:48:51.693297 | orchestrator | Friday 30 May 2025 00:47:51 +0000 (0:00:02.599) 0:00:13.172 ************ 2025-05-30 00:48:51.693309 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:48:51.693319 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:48:51.693330 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:48:51.693341 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:48:51.693351 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:48:51.693362 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:48:51.693373 | orchestrator | 2025-05-30 00:48:51.693384 | orchestrator | TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] *** 2025-05-30 00:48:51.693394 | orchestrator | Friday 30 May 2025 00:47:54 +0000 (0:00:02.447) 0:00:15.619 ************ 2025-05-30 00:48:51.693405 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:48:51.693416 | 
orchestrator | changed: [testbed-node-2] 2025-05-30 00:48:51.693427 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:48:51.693437 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:48:51.693448 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:48:51.693459 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:48:51.693469 | orchestrator | 2025-05-30 00:48:51.693480 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-05-30 00:48:51.693491 | orchestrator | Friday 30 May 2025 00:47:58 +0000 (0:00:03.750) 0:00:19.370 ************ 2025-05-30 00:48:51.693501 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:48:51.693512 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:48:51.693523 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:48:51.693534 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:48:51.693544 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:48:51.693555 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:48:51.693566 | orchestrator | 2025-05-30 00:48:51.693576 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-05-30 00:48:51.693587 | orchestrator | Friday 30 May 2025 00:48:00 +0000 (0:00:02.252) 0:00:21.622 ************ 2025-05-30 00:48:51.693598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-30 00:48:51.693610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-30 00:48:51.693632 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-30 00:48:51.693649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-30 00:48:51.693661 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-30 00:48:51.693673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-30 00:48:51.693684 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-30 00:48:51.693708 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-30 00:48:51.693728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-30 00:48:51.693740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-30 00:48:51.693751 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-30 00:48:51.693807 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-30 00:48:51.693826 | orchestrator | 2025-05-30 00:48:51.693838 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-30 00:48:51.693849 | orchestrator | Friday 30 May 2025 00:48:03 +0000 (0:00:02.737) 0:00:24.360 ************ 
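Each of the container definitions above carries a 'CMD-SHELL' healthcheck: ovsdb-client list-dbs for openvswitch_db and ovs-appctl version for openvswitch_vswitchd, each with a 30 s interval and timeout, 3 retries and a 5 s start period. The following is a minimal sketch of how those checks can be reproduced by hand on a testbed node once the containers are up; it assumes Docker is the container engine in use (the container names are taken from the log above, and the expected outputs are only indicative):

  # Run the same commands the healthchecks above use (assumes Docker; the
  # container names openvswitch_db and openvswitch_vswitchd come from the log).
  docker exec openvswitch_db ovsdb-client list-dbs           # typically lists Open_vSwitch
  docker exec openvswitch_vswitchd ovs-appctl version        # prints the running OVS version
  # Ask the engine for the health state it derives from those checks:
  docker inspect --format '{{.State.Health.Status}}' openvswitch_db openvswitch_vswitchd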
2025-05-30 00:48:51.693867 | orchestrator | 2025-05-30 00:48:51.693878 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-30 00:48:51.693889 | orchestrator | Friday 30 May 2025 00:48:03 +0000 (0:00:00.175) 0:00:24.535 ************ 2025-05-30 00:48:51.693900 | orchestrator | 2025-05-30 00:48:51.693911 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-30 00:48:51.693921 | orchestrator | Friday 30 May 2025 00:48:03 +0000 (0:00:00.243) 0:00:24.779 ************ 2025-05-30 00:48:51.693932 | orchestrator | 2025-05-30 00:48:51.693943 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-30 00:48:51.693953 | orchestrator | Friday 30 May 2025 00:48:03 +0000 (0:00:00.111) 0:00:24.890 ************ 2025-05-30 00:48:51.693964 | orchestrator | 2025-05-30 00:48:51.693974 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-30 00:48:51.693992 | orchestrator | Friday 30 May 2025 00:48:03 +0000 (0:00:00.237) 0:00:25.127 ************ 2025-05-30 00:48:51.694003 | orchestrator | 2025-05-30 00:48:51.694064 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-30 00:48:51.694078 | orchestrator | Friday 30 May 2025 00:48:04 +0000 (0:00:00.114) 0:00:25.241 ************ 2025-05-30 00:48:51.694090 | orchestrator | 2025-05-30 00:48:51.694100 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-05-30 00:48:51.694111 | orchestrator | Friday 30 May 2025 00:48:04 +0000 (0:00:00.204) 0:00:25.446 ************ 2025-05-30 00:48:51.694122 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:48:51.694133 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:48:51.694143 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:48:51.694154 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:48:51.694164 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:48:51.694175 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:48:51.694186 | orchestrator | 2025-05-30 00:48:51.694197 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-05-30 00:48:51.694208 | orchestrator | Friday 30 May 2025 00:48:15 +0000 (0:00:11.359) 0:00:36.805 ************ 2025-05-30 00:48:51.694469 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:48:51.694564 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:48:51.694578 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:48:51.694590 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:48:51.694600 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:48:51.694611 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:48:51.694622 | orchestrator | 2025-05-30 00:48:51.694634 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-05-30 00:48:51.694646 | orchestrator | Friday 30 May 2025 00:48:17 +0000 (0:00:01.597) 0:00:38.403 ************ 2025-05-30 00:48:51.694656 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:48:51.694668 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:48:51.694678 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:48:51.694689 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:48:51.694699 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:48:51.694726 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:48:51.694737 | 
orchestrator | 2025-05-30 00:48:51.694748 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-05-30 00:48:51.694804 | orchestrator | Friday 30 May 2025 00:48:27 +0000 (0:00:10.070) 0:00:48.474 ************ 2025-05-30 00:48:51.694817 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-05-30 00:48:51.694828 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-05-30 00:48:51.694839 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-05-30 00:48:51.694849 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-05-30 00:48:51.694860 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-05-30 00:48:51.694898 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-05-30 00:48:51.694910 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-05-30 00:48:51.694921 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-05-30 00:48:51.694931 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-05-30 00:48:51.694942 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-05-30 00:48:51.694953 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-05-30 00:48:51.694964 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-05-30 00:48:51.694975 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-30 00:48:51.694986 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-30 00:48:51.694996 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-30 00:48:51.695009 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-30 00:48:51.695021 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-30 00:48:51.695033 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-30 00:48:51.695045 | orchestrator | 2025-05-30 00:48:51.695057 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-05-30 00:48:51.695070 | orchestrator | Friday 30 May 2025 00:48:35 +0000 (0:00:08.140) 0:00:56.615 ************ 2025-05-30 00:48:51.695081 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-05-30 00:48:51.695094 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:48:51.695106 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-05-30 
00:48:51.695119 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:48:51.695131 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-05-30 00:48:51.695143 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:48:51.695155 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-05-30 00:48:51.695167 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-05-30 00:48:51.695179 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-05-30 00:48:51.695191 | orchestrator | 2025-05-30 00:48:51.695204 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-05-30 00:48:51.695216 | orchestrator | Friday 30 May 2025 00:48:37 +0000 (0:00:02.552) 0:00:59.167 ************ 2025-05-30 00:48:51.695229 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-05-30 00:48:51.695240 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:48:51.695251 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-05-30 00:48:51.695261 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:48:51.695273 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-05-30 00:48:51.695283 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:48:51.695294 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-05-30 00:48:51.695322 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-05-30 00:48:51.695333 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-05-30 00:48:51.695344 | orchestrator | 2025-05-30 00:48:51.695355 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-05-30 00:48:51.695373 | orchestrator | Friday 30 May 2025 00:48:41 +0000 (0:00:03.881) 0:01:03.049 ************ 2025-05-30 00:48:51.695384 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:48:51.695394 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:48:51.695405 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:48:51.695415 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:48:51.695426 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:48:51.695436 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:48:51.695447 | orchestrator | 2025-05-30 00:48:51.695458 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:48:51.695475 | orchestrator | testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-30 00:48:51.695487 | orchestrator | testbed-node-1 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-30 00:48:51.695498 | orchestrator | testbed-node-2 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-30 00:48:51.695509 | orchestrator | testbed-node-3 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-30 00:48:51.695520 | orchestrator | testbed-node-4 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-30 00:48:51.695530 | orchestrator | testbed-node-5 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-30 00:48:51.695541 | orchestrator | 2025-05-30 00:48:51.695552 | orchestrator | 2025-05-30 00:48:51.695562 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-30 00:48:51.695573 | orchestrator | Friday 30 May 2025 00:48:50 +0000 
(0:00:08.599) 0:01:11.649 ************ 2025-05-30 00:48:51.695584 | orchestrator | =============================================================================== 2025-05-30 00:48:51.695595 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.67s 2025-05-30 00:48:51.695605 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.36s 2025-05-30 00:48:51.695616 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.14s 2025-05-30 00:48:51.695626 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.88s 2025-05-30 00:48:51.695637 | orchestrator | openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server --- 3.75s 2025-05-30 00:48:51.695648 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.74s 2025-05-30 00:48:51.695658 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.60s 2025-05-30 00:48:51.695669 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.55s 2025-05-30 00:48:51.695679 | orchestrator | openvswitch : Copying over start-ovs file for openvswitch-vswitchd ------ 2.45s 2025-05-30 00:48:51.695690 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.25s 2025-05-30 00:48:51.695700 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.00s 2025-05-30 00:48:51.695711 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.88s 2025-05-30 00:48:51.695721 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.60s 2025-05-30 00:48:51.695732 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.56s 2025-05-30 00:48:51.695743 | orchestrator | module-load : Load modules ---------------------------------------------- 1.50s 2025-05-30 00:48:51.695777 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.29s 2025-05-30 00:48:51.695797 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.09s 2025-05-30 00:48:51.695829 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.78s 2025-05-30 00:48:51.695848 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.64s 2025-05-30 00:48:51.695859 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.56s 2025-05-30 00:48:51.695919 | orchestrator | 2025-05-30 00:48:51 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:48:54.728558 | orchestrator | 2025-05-30 00:48:54 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:48:54.729631 | orchestrator | 2025-05-30 00:48:54 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:48:54.731277 | orchestrator | 2025-05-30 00:48:54 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:48:54.732586 | orchestrator | 2025-05-30 00:48:54 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:48:54.733871 | orchestrator | 2025-05-30 00:48:54 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:48:54.734176 | orchestrator | 2025-05-30 00:48:54 | INFO  | Wait 1 second(s) until the next check 
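While the remaining OSISM tasks above are still being polled in state STARTED, the Open vSwitch configuration the play just applied ("Set system-id, hostname and hw-offload", "Ensuring OVS bridge is properly setup", "Ensuring OVS ports are properly setup") boils down to a handful of ovs-vsctl operations. The sketch below shows rough command-line equivalents for testbed-node-0; it is illustrative only and not the exact module calls kolla-ansible makes:

  # Rough ovs-vsctl equivalents of the tasks above (sketch, shown for testbed-node-0).
  ovs-vsctl set Open_vSwitch . external_ids:system-id=testbed-node-0
  ovs-vsctl set Open_vSwitch . external_ids:hostname=testbed-node-0
  # hw-offload was ensured absent ('state': 'absent' in the item above):
  ovs-vsctl remove Open_vSwitch . other_config hw-offload
  # Bridge and port setup changed only testbed-node-0/1/2 (skipped on 3/4/5):
  ovs-vsctl --may-exist add-br br-ex
  ovs-vsctl --may-exist add-port br-ex vxlan0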
2025-05-30 00:48:57.774870 | orchestrator | 2025-05-30 00:48:57 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:48:57.774981 | orchestrator | 2025-05-30 00:48:57 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:48:57.774997 | orchestrator | 2025-05-30 00:48:57 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:48:57.775514 | orchestrator | 2025-05-30 00:48:57 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:48:57.777204 | orchestrator | 2025-05-30 00:48:57 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:48:57.777228 | orchestrator | 2025-05-30 00:48:57 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:49:00.821578 | orchestrator | 2025-05-30 00:49:00 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:49:00.821683 | orchestrator | 2025-05-30 00:49:00 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:49:00.821698 | orchestrator | 2025-05-30 00:49:00 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:49:00.821711 | orchestrator | 2025-05-30 00:49:00 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:49:00.821723 | orchestrator | 2025-05-30 00:49:00 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:49:00.821734 | orchestrator | 2025-05-30 00:49:00 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:49:03.843679 | orchestrator | 2025-05-30 00:49:03 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:49:03.846944 | orchestrator | 2025-05-30 00:49:03 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:49:03.849973 | orchestrator | 2025-05-30 00:49:03 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:49:03.850399 | orchestrator | 2025-05-30 00:49:03 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:49:03.850855 | orchestrator | 2025-05-30 00:49:03 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:49:03.853473 | orchestrator | 2025-05-30 00:49:03 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:49:06.881349 | orchestrator | 2025-05-30 00:49:06 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:49:06.883187 | orchestrator | 2025-05-30 00:49:06 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:49:06.884094 | orchestrator | 2025-05-30 00:49:06 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:49:06.884942 | orchestrator | 2025-05-30 00:49:06 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:49:06.885972 | orchestrator | 2025-05-30 00:49:06 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:49:06.886007 | orchestrator | 2025-05-30 00:49:06 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:49:09.935246 | orchestrator | 2025-05-30 00:49:09 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:49:09.935627 | orchestrator | 2025-05-30 00:49:09 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:49:09.937233 | orchestrator | 2025-05-30 00:49:09 | INFO  | Task 
634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:49:09.937946 | orchestrator | 2025-05-30 00:49:09 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:49:09.939275 | orchestrator | 2025-05-30 00:49:09 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:49:09.940248 | orchestrator | 2025-05-30 00:49:09 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:49:12.984919 | orchestrator | 2025-05-30 00:49:12 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:49:12.985203 | orchestrator | 2025-05-30 00:49:12 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:49:12.986089 | orchestrator | 2025-05-30 00:49:12 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:49:12.987384 | orchestrator | 2025-05-30 00:49:12 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:49:12.991033 | orchestrator | 2025-05-30 00:49:12 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:49:12.991066 | orchestrator | 2025-05-30 00:49:12 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:49:16.033953 | orchestrator | 2025-05-30 00:49:16 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:49:16.034666 | orchestrator | 2025-05-30 00:49:16 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:49:16.036551 | orchestrator | 2025-05-30 00:49:16 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:49:16.037681 | orchestrator | 2025-05-30 00:49:16 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:49:16.039288 | orchestrator | 2025-05-30 00:49:16 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:49:16.039381 | orchestrator | 2025-05-30 00:49:16 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:49:19.078328 | orchestrator | 2025-05-30 00:49:19 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:49:19.078706 | orchestrator | 2025-05-30 00:49:19 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:49:19.082310 | orchestrator | 2025-05-30 00:49:19 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:49:19.084721 | orchestrator | 2025-05-30 00:49:19 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:49:19.086435 | orchestrator | 2025-05-30 00:49:19 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:49:19.086606 | orchestrator | 2025-05-30 00:49:19 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:49:22.137170 | orchestrator | 2025-05-30 00:49:22 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:49:22.138104 | orchestrator | 2025-05-30 00:49:22 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:49:22.139246 | orchestrator | 2025-05-30 00:49:22 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:49:22.140352 | orchestrator | 2025-05-30 00:49:22 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:49:22.141604 | orchestrator | 2025-05-30 00:49:22 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:49:22.141708 | orchestrator | 2025-05-30 
00:49:22 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:49:25.185130 | orchestrator | 2025-05-30 00:49:25 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:49:25.187411 | orchestrator | 2025-05-30 00:49:25 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:49:25.190479 | orchestrator | 2025-05-30 00:49:25 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:49:25.192900 | orchestrator | 2025-05-30 00:49:25 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:49:25.196970 | orchestrator | 2025-05-30 00:49:25 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:49:25.197011 | orchestrator | 2025-05-30 00:49:25 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:49:28.246195 | orchestrator | 2025-05-30 00:49:28 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:49:28.253196 | orchestrator | 2025-05-30 00:49:28 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:49:28.253234 | orchestrator | 2025-05-30 00:49:28 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:49:28.254571 | orchestrator | 2025-05-30 00:49:28 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:49:28.255929 | orchestrator | 2025-05-30 00:49:28 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:49:28.255956 | orchestrator | 2025-05-30 00:49:28 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:49:31.298385 | orchestrator | 2025-05-30 00:49:31 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:49:31.299860 | orchestrator | 2025-05-30 00:49:31 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:49:31.299948 | orchestrator | 2025-05-30 00:49:31 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:49:31.301411 | orchestrator | 2025-05-30 00:49:31 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:49:31.302626 | orchestrator | 2025-05-30 00:49:31 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:49:31.302969 | orchestrator | 2025-05-30 00:49:31 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:49:34.362336 | orchestrator | 2025-05-30 00:49:34 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:49:34.364250 | orchestrator | 2025-05-30 00:49:34 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:49:34.365499 | orchestrator | 2025-05-30 00:49:34 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:49:34.369114 | orchestrator | 2025-05-30 00:49:34 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:49:34.369690 | orchestrator | 2025-05-30 00:49:34 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:49:34.369726 | orchestrator | 2025-05-30 00:49:34 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:49:37.410587 | orchestrator | 2025-05-30 00:49:37 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:49:37.413670 | orchestrator | 2025-05-30 00:49:37 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:49:37.414308 | orchestrator | 2025-05-30 
00:49:37 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:49:37.414918 | orchestrator | 2025-05-30 00:49:37 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:49:37.415402 | orchestrator | 2025-05-30 00:49:37 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:49:37.415739 | orchestrator | 2025-05-30 00:49:37 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:49:40.455984 | orchestrator | 2025-05-30 00:49:40 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:49:40.458105 | orchestrator | 2025-05-30 00:49:40 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:49:40.459150 | orchestrator | 2025-05-30 00:49:40 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:49:40.464416 | orchestrator | 2025-05-30 00:49:40 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:49:40.466109 | orchestrator | 2025-05-30 00:49:40 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:49:40.466144 | orchestrator | 2025-05-30 00:49:40 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:49:43.507117 | orchestrator | 2025-05-30 00:49:43 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:49:43.514707 | orchestrator | 2025-05-30 00:49:43 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:49:43.514821 | orchestrator | 2025-05-30 00:49:43 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:49:43.520429 | orchestrator | 2025-05-30 00:49:43 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:49:43.524068 | orchestrator | 2025-05-30 00:49:43 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:49:43.524939 | orchestrator | 2025-05-30 00:49:43 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:49:46.563668 | orchestrator | 2025-05-30 00:49:46 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:49:46.564006 | orchestrator | 2025-05-30 00:49:46 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:49:46.569178 | orchestrator | 2025-05-30 00:49:46 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:49:46.571288 | orchestrator | 2025-05-30 00:49:46 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:49:46.572596 | orchestrator | 2025-05-30 00:49:46 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:49:46.572668 | orchestrator | 2025-05-30 00:49:46 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:49:49.626876 | orchestrator | 2025-05-30 00:49:49 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:49:49.630187 | orchestrator | 2025-05-30 00:49:49 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:49:49.630797 | orchestrator | 2025-05-30 00:49:49 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:49:49.631646 | orchestrator | 2025-05-30 00:49:49 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:49:49.632620 | orchestrator | 2025-05-30 00:49:49 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:49:49.632643 | 
orchestrator | 2025-05-30 00:49:49 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:49:52.673212 | orchestrator | 2025-05-30 00:49:52 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:49:52.673603 | orchestrator | 2025-05-30 00:49:52 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:49:52.674443 | orchestrator | 2025-05-30 00:49:52 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:49:52.675158 | orchestrator | 2025-05-30 00:49:52 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:49:52.675912 | orchestrator | 2025-05-30 00:49:52 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:49:52.675940 | orchestrator | 2025-05-30 00:49:52 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:49:55.710176 | orchestrator | 2025-05-30 00:49:55 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:49:55.712107 | orchestrator | 2025-05-30 00:49:55 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:49:55.713588 | orchestrator | 2025-05-30 00:49:55 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:49:55.713619 | orchestrator | 2025-05-30 00:49:55 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:49:55.714155 | orchestrator | 2025-05-30 00:49:55 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:49:55.714180 | orchestrator | 2025-05-30 00:49:55 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:49:58.752117 | orchestrator | 2025-05-30 00:49:58 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:49:58.752731 | orchestrator | 2025-05-30 00:49:58 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:49:58.753192 | orchestrator | 2025-05-30 00:49:58 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:49:58.757816 | orchestrator | 2025-05-30 00:49:58 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:49:58.759210 | orchestrator | 2025-05-30 00:49:58 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:49:58.759295 | orchestrator | 2025-05-30 00:49:58 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:50:01.795199 | orchestrator | 2025-05-30 00:50:01 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:50:01.795310 | orchestrator | 2025-05-30 00:50:01 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:50:01.795424 | orchestrator | 2025-05-30 00:50:01 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:50:01.795736 | orchestrator | 2025-05-30 00:50:01 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:50:01.796384 | orchestrator | 2025-05-30 00:50:01 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:50:01.796408 | orchestrator | 2025-05-30 00:50:01 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:50:04.836127 | orchestrator | 2025-05-30 00:50:04 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:50:04.836221 | orchestrator | 2025-05-30 00:50:04 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:50:04.836343 | 
orchestrator | 2025-05-30 00:50:04 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:50:04.836852 | orchestrator | 2025-05-30 00:50:04 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:50:04.837677 | orchestrator | 2025-05-30 00:50:04 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:50:04.837706 | orchestrator | 2025-05-30 00:50:04 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:50:07.873199 | orchestrator | 2025-05-30 00:50:07 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:50:07.873292 | orchestrator | 2025-05-30 00:50:07 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:50:07.873303 | orchestrator | 2025-05-30 00:50:07 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:50:07.873312 | orchestrator | 2025-05-30 00:50:07 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:50:07.873599 | orchestrator | 2025-05-30 00:50:07 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:50:07.873683 | orchestrator | 2025-05-30 00:50:07 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:50:10.909232 | orchestrator | 2025-05-30 00:50:10 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:50:10.909473 | orchestrator | 2025-05-30 00:50:10 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:50:10.909919 | orchestrator | 2025-05-30 00:50:10 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:50:10.910401 | orchestrator | 2025-05-30 00:50:10 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:50:10.911412 | orchestrator | 2025-05-30 00:50:10 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:50:10.911437 | orchestrator | 2025-05-30 00:50:10 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:50:13.948363 | orchestrator | 2025-05-30 00:50:13 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:50:13.948597 | orchestrator | 2025-05-30 00:50:13 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:50:13.950268 | orchestrator | 2025-05-30 00:50:13 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state STARTED 2025-05-30 00:50:13.950468 | orchestrator | 2025-05-30 00:50:13 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:50:13.951067 | orchestrator | 2025-05-30 00:50:13 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:50:13.951096 | orchestrator | 2025-05-30 00:50:13 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:50:16.983522 | orchestrator | 2025-05-30 00:50:16 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:50:16.983919 | orchestrator | 2025-05-30 00:50:16 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:50:16.984740 | orchestrator | 2025-05-30 00:50:16 | INFO  | Task 634c67c3-ef69-4324-a256-8cdeffc475fe is in state SUCCESS 2025-05-30 00:50:16.986223 | orchestrator | 2025-05-30 00:50:16.986263 | orchestrator | 2025-05-30 00:50:16.986275 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-05-30 00:50:16.986310 | orchestrator | 2025-05-30 
00:50:16.986322 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-05-30 00:50:16.986333 | orchestrator | Friday 30 May 2025 00:48:01 +0000 (0:00:00.128) 0:00:00.128 ************ 2025-05-30 00:50:16.986344 | orchestrator | ok: [localhost] => { 2025-05-30 00:50:16.986358 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-05-30 00:50:16.986369 | orchestrator | } 2025-05-30 00:50:16.986380 | orchestrator | 2025-05-30 00:50:16.986391 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-05-30 00:50:16.986402 | orchestrator | Friday 30 May 2025 00:48:01 +0000 (0:00:00.041) 0:00:00.170 ************ 2025-05-30 00:50:16.986414 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-05-30 00:50:16.986427 | orchestrator | ...ignoring 2025-05-30 00:50:16.986437 | orchestrator | 2025-05-30 00:50:16.986448 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-05-30 00:50:16.986459 | orchestrator | Friday 30 May 2025 00:48:04 +0000 (0:00:02.739) 0:00:02.910 ************ 2025-05-30 00:50:16.986470 | orchestrator | skipping: [localhost] 2025-05-30 00:50:16.986480 | orchestrator | 2025-05-30 00:50:16.986491 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-05-30 00:50:16.986502 | orchestrator | Friday 30 May 2025 00:48:04 +0000 (0:00:00.101) 0:00:03.012 ************ 2025-05-30 00:50:16.986512 | orchestrator | ok: [localhost] 2025-05-30 00:50:16.986523 | orchestrator | 2025-05-30 00:50:16.986534 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-30 00:50:16.986544 | orchestrator | 2025-05-30 00:50:16.986555 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-30 00:50:16.986565 | orchestrator | Friday 30 May 2025 00:48:05 +0000 (0:00:00.439) 0:00:03.452 ************ 2025-05-30 00:50:16.986576 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:50:16.986587 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:50:16.986597 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:50:16.986608 | orchestrator | 2025-05-30 00:50:16.986619 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-30 00:50:16.986629 | orchestrator | Friday 30 May 2025 00:48:05 +0000 (0:00:00.606) 0:00:04.059 ************ 2025-05-30 00:50:16.986640 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-05-30 00:50:16.986660 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-05-30 00:50:16.986679 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-05-30 00:50:16.986697 | orchestrator | 2025-05-30 00:50:16.986717 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-05-30 00:50:16.986738 | orchestrator | 2025-05-30 00:50:16.986759 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-30 00:50:16.986805 | orchestrator | Friday 30 May 2025 00:48:06 +0000 (0:00:00.706) 0:00:04.765 ************ 2025-05-30 00:50:16.986824 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2025-05-30 00:50:16.986837 | orchestrator | 2025-05-30 00:50:16.986850 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-05-30 00:50:16.986862 | orchestrator | Friday 30 May 2025 00:48:07 +0000 (0:00:00.837) 0:00:05.603 ************ 2025-05-30 00:50:16.986874 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:50:16.986887 | orchestrator | 2025-05-30 00:50:16.986899 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-05-30 00:50:16.986927 | orchestrator | Friday 30 May 2025 00:48:08 +0000 (0:00:01.004) 0:00:06.607 ************ 2025-05-30 00:50:16.986939 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:50:16.986952 | orchestrator | 2025-05-30 00:50:16.986965 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-05-30 00:50:16.986976 | orchestrator | Friday 30 May 2025 00:48:08 +0000 (0:00:00.414) 0:00:07.022 ************ 2025-05-30 00:50:16.986998 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:50:16.987010 | orchestrator | 2025-05-30 00:50:16.987022 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-05-30 00:50:16.987035 | orchestrator | Friday 30 May 2025 00:48:09 +0000 (0:00:00.639) 0:00:07.661 ************ 2025-05-30 00:50:16.987046 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:50:16.987059 | orchestrator | 2025-05-30 00:50:16.987071 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-05-30 00:50:16.987084 | orchestrator | Friday 30 May 2025 00:48:09 +0000 (0:00:00.367) 0:00:08.028 ************ 2025-05-30 00:50:16.987096 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:50:16.987107 | orchestrator | 2025-05-30 00:50:16.987118 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-30 00:50:16.987129 | orchestrator | Friday 30 May 2025 00:48:10 +0000 (0:00:00.287) 0:00:08.316 ************ 2025-05-30 00:50:16.987139 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:50:16.987150 | orchestrator | 2025-05-30 00:50:16.987161 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-05-30 00:50:16.987171 | orchestrator | Friday 30 May 2025 00:48:10 +0000 (0:00:00.769) 0:00:09.085 ************ 2025-05-30 00:50:16.987182 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:50:16.987193 | orchestrator | 2025-05-30 00:50:16.987204 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-05-30 00:50:16.987214 | orchestrator | Friday 30 May 2025 00:48:11 +0000 (0:00:00.776) 0:00:09.862 ************ 2025-05-30 00:50:16.987225 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:50:16.987236 | orchestrator | 2025-05-30 00:50:16.987246 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-05-30 00:50:16.987257 | orchestrator | Friday 30 May 2025 00:48:11 +0000 (0:00:00.321) 0:00:10.183 ************ 2025-05-30 00:50:16.987268 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:50:16.987278 | orchestrator | 2025-05-30 00:50:16.987300 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-05-30 00:50:16.987311 | orchestrator | Friday 30 May 2025 00:48:12 +0000 
(0:00:00.320) 0:00:10.504 ************ 2025-05-30 00:50:16.987328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-30 00:50:16.987346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-30 00:50:16.987373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-30 00:50:16.987385 | orchestrator | 2025-05-30 00:50:16.987396 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-05-30 00:50:16.987408 | 
orchestrator | Friday 30 May 2025 00:48:13 +0000 (0:00:00.928) 0:00:11.432 ************ 2025-05-30 00:50:16.987429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-30 00:50:16.987442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-30 00:50:16.987460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-30 00:50:16.987479 | orchestrator | 2025-05-30 00:50:16.987490 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] 
******************************* 2025-05-30 00:50:16.987501 | orchestrator | Friday 30 May 2025 00:48:14 +0000 (0:00:01.572) 0:00:13.005 ************ 2025-05-30 00:50:16.987512 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-30 00:50:16.987523 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-30 00:50:16.987534 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-30 00:50:16.987545 | orchestrator | 2025-05-30 00:50:16.987556 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-05-30 00:50:16.987566 | orchestrator | Friday 30 May 2025 00:48:17 +0000 (0:00:02.314) 0:00:15.320 ************ 2025-05-30 00:50:16.987577 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-30 00:50:16.987587 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-30 00:50:16.987598 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-30 00:50:16.987609 | orchestrator | 2025-05-30 00:50:16.987661 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-05-30 00:50:16.987673 | orchestrator | Friday 30 May 2025 00:48:20 +0000 (0:00:03.468) 0:00:18.788 ************ 2025-05-30 00:50:16.987684 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-30 00:50:16.987696 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-30 00:50:16.987715 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-30 00:50:16.987733 | orchestrator | 2025-05-30 00:50:16.987763 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-05-30 00:50:16.987809 | orchestrator | Friday 30 May 2025 00:48:22 +0000 (0:00:01.781) 0:00:20.570 ************ 2025-05-30 00:50:16.987824 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-30 00:50:16.987835 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-30 00:50:16.987845 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-30 00:50:16.987856 | orchestrator | 2025-05-30 00:50:16.987866 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-05-30 00:50:16.987877 | orchestrator | Friday 30 May 2025 00:48:24 +0000 (0:00:01.817) 0:00:22.387 ************ 2025-05-30 00:50:16.987887 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-30 00:50:16.987898 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-30 00:50:16.987909 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-30 00:50:16.987928 | orchestrator | 2025-05-30 00:50:16.987939 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-05-30 00:50:16.987949 | orchestrator | Friday 30 May 2025 00:48:25 +0000 (0:00:01.528) 0:00:23.916 
************ 2025-05-30 00:50:16.987960 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-30 00:50:16.987971 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-30 00:50:16.987981 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-30 00:50:16.987992 | orchestrator | 2025-05-30 00:50:16.988002 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-30 00:50:16.988013 | orchestrator | Friday 30 May 2025 00:48:27 +0000 (0:00:01.965) 0:00:25.881 ************ 2025-05-30 00:50:16.988024 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:50:16.988034 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:50:16.988045 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:50:16.988055 | orchestrator | 2025-05-30 00:50:16.988066 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-05-30 00:50:16.988076 | orchestrator | Friday 30 May 2025 00:48:29 +0000 (0:00:01.590) 0:00:27.472 ************ 2025-05-30 00:50:16.988094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-30 00:50:16.988108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-30 00:50:16.988134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-30 00:50:16.988164 | orchestrator | 2025-05-30 00:50:16.988183 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-05-30 00:50:16.988201 | orchestrator | Friday 30 May 2025 00:48:31 +0000 (0:00:01.821) 0:00:29.294 ************ 2025-05-30 00:50:16.988214 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:50:16.988225 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:50:16.988235 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:50:16.988246 | orchestrator | 2025-05-30 00:50:16.988257 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-05-30 00:50:16.988269 | orchestrator | Friday 30 May 2025 00:48:31 +0000 (0:00:00.937) 0:00:30.232 ************ 2025-05-30 00:50:16.988286 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:50:16.988297 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:50:16.988308 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:50:16.988318 | orchestrator | 2025-05-30 00:50:16.988329 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-05-30 00:50:16.988339 | orchestrator | Friday 30 May 2025 00:48:38 +0000 (0:00:06.566) 0:00:36.798 ************ 2025-05-30 00:50:16.988350 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:50:16.988360 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:50:16.988371 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:50:16.988381 | orchestrator | 2025-05-30 00:50:16.988392 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-30 00:50:16.988403 | orchestrator | 2025-05-30 00:50:16.988413 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-30 00:50:16.988424 | orchestrator | Friday 30 May 2025 00:48:38 +0000 (0:00:00.350) 0:00:37.149 ************ 2025-05-30 00:50:16.988434 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:50:16.988445 | orchestrator | 2025-05-30 00:50:16.988458 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-30 00:50:16.988476 | orchestrator | Friday 30 May 2025 00:48:39 +0000 (0:00:00.620) 0:00:37.769 ************ 2025-05-30 00:50:16.988495 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:50:16.988510 | orchestrator | 2025-05-30 00:50:16.988521 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-30 00:50:16.988532 | orchestrator | Friday 30 May 2025 
00:48:40 +0000 (0:00:00.724) 0:00:38.494 ************ 2025-05-30 00:50:16.988542 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:50:16.988553 | orchestrator | 2025-05-30 00:50:16.988564 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-30 00:50:16.988574 | orchestrator | Friday 30 May 2025 00:48:46 +0000 (0:00:06.634) 0:00:45.128 ************ 2025-05-30 00:50:16.988585 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:50:16.988596 | orchestrator | 2025-05-30 00:50:16.988606 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-30 00:50:16.988617 | orchestrator | 2025-05-30 00:50:16.988627 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-30 00:50:16.988638 | orchestrator | Friday 30 May 2025 00:49:35 +0000 (0:00:48.494) 0:01:33.623 ************ 2025-05-30 00:50:16.988649 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:50:16.988659 | orchestrator | 2025-05-30 00:50:16.988670 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-30 00:50:16.988681 | orchestrator | Friday 30 May 2025 00:49:36 +0000 (0:00:00.979) 0:01:34.603 ************ 2025-05-30 00:50:16.988691 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:50:16.988712 | orchestrator | 2025-05-30 00:50:16.988723 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-30 00:50:16.988736 | orchestrator | Friday 30 May 2025 00:49:36 +0000 (0:00:00.456) 0:01:35.059 ************ 2025-05-30 00:50:16.988754 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:50:16.988773 | orchestrator | 2025-05-30 00:50:16.988861 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-30 00:50:16.988880 | orchestrator | Friday 30 May 2025 00:49:38 +0000 (0:00:02.214) 0:01:37.274 ************ 2025-05-30 00:50:16.988894 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:50:16.988905 | orchestrator | 2025-05-30 00:50:16.988916 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-30 00:50:16.988926 | orchestrator | 2025-05-30 00:50:16.988937 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-30 00:50:16.988948 | orchestrator | Friday 30 May 2025 00:49:56 +0000 (0:00:17.184) 0:01:54.459 ************ 2025-05-30 00:50:16.988967 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:50:16.988985 | orchestrator | 2025-05-30 00:50:16.989004 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-30 00:50:16.989015 | orchestrator | Friday 30 May 2025 00:49:56 +0000 (0:00:00.689) 0:01:55.149 ************ 2025-05-30 00:50:16.989026 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:50:16.989036 | orchestrator | 2025-05-30 00:50:16.989756 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-30 00:50:16.989842 | orchestrator | Friday 30 May 2025 00:49:57 +0000 (0:00:00.351) 0:01:55.501 ************ 2025-05-30 00:50:16.989866 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:50:16.989886 | orchestrator | 2025-05-30 00:50:16.989905 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-30 00:50:16.989917 | orchestrator | Friday 30 May 2025 00:49:59 +0000 
(0:00:01.903) 0:01:57.405 ************ 2025-05-30 00:50:16.989928 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:50:16.989939 | orchestrator | 2025-05-30 00:50:16.989950 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-05-30 00:50:16.989960 | orchestrator | 2025-05-30 00:50:16.989971 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-05-30 00:50:16.989981 | orchestrator | Friday 30 May 2025 00:50:12 +0000 (0:00:13.403) 0:02:10.808 ************ 2025-05-30 00:50:16.989992 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:50:16.990003 | orchestrator | 2025-05-30 00:50:16.990013 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-05-30 00:50:16.990127 | orchestrator | Friday 30 May 2025 00:50:13 +0000 (0:00:00.487) 0:02:11.296 ************ 2025-05-30 00:50:16.990139 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-30 00:50:16.990149 | orchestrator | enable_outward_rabbitmq_True 2025-05-30 00:50:16.990160 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-30 00:50:16.990171 | orchestrator | outward_rabbitmq_restart 2025-05-30 00:50:16.990181 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:50:16.990193 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:50:16.990203 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:50:16.990213 | orchestrator | 2025-05-30 00:50:16.990224 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-05-30 00:50:16.990235 | orchestrator | skipping: no hosts matched 2025-05-30 00:50:16.990246 | orchestrator | 2025-05-30 00:50:16.990256 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-05-30 00:50:16.990267 | orchestrator | skipping: no hosts matched 2025-05-30 00:50:16.990277 | orchestrator | 2025-05-30 00:50:16.990288 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-05-30 00:50:16.990298 | orchestrator | skipping: no hosts matched 2025-05-30 00:50:16.990309 | orchestrator | 2025-05-30 00:50:16.990319 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:50:16.990345 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-05-30 00:50:16.990357 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-30 00:50:16.990368 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:50:16.990378 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 00:50:16.990389 | orchestrator | 2025-05-30 00:50:16.990400 | orchestrator | 2025-05-30 00:50:16.990410 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-30 00:50:16.990421 | orchestrator | Friday 30 May 2025 00:50:15 +0000 (0:00:02.489) 0:02:13.785 ************ 2025-05-30 00:50:16.990432 | orchestrator | =============================================================================== 2025-05-30 00:50:16.990442 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 79.08s 2025-05-30 00:50:16.990453 | orchestrator | 
rabbitmq : Restart rabbitmq container ---------------------------------- 10.75s 2025-05-30 00:50:16.990463 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.57s 2025-05-30 00:50:16.990474 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.47s 2025-05-30 00:50:16.990484 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.74s 2025-05-30 00:50:16.990495 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.49s 2025-05-30 00:50:16.990505 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.31s 2025-05-30 00:50:16.990515 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.29s 2025-05-30 00:50:16.990526 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.97s 2025-05-30 00:50:16.990536 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.82s 2025-05-30 00:50:16.990547 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.82s 2025-05-30 00:50:16.990557 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.78s 2025-05-30 00:50:16.990568 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.59s 2025-05-30 00:50:16.990578 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.57s 2025-05-30 00:50:16.990589 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.53s 2025-05-30 00:50:16.990599 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.53s 2025-05-30 00:50:16.990621 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.00s 2025-05-30 00:50:16.990632 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.94s 2025-05-30 00:50:16.990643 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.93s 2025-05-30 00:50:16.990654 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.84s 2025-05-30 00:50:16.990671 | orchestrator | 2025-05-30 00:50:16 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:50:16.990682 | orchestrator | 2025-05-30 00:50:16 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:50:16.990693 | orchestrator | 2025-05-30 00:50:16 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:50:20.030324 | orchestrator | 2025-05-30 00:50:20 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:50:20.030925 | orchestrator | 2025-05-30 00:50:20 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:50:20.030971 | orchestrator | 2025-05-30 00:50:20 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:50:20.034414 | orchestrator | 2025-05-30 00:50:20 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:50:20.034453 | orchestrator | 2025-05-30 00:50:20 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:50:23.076633 | orchestrator | 2025-05-30 00:50:23 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:50:23.076833 | orchestrator | 2025-05-30 00:50:23 | 
INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:50:23.079432 | orchestrator | 2025-05-30 00:50:23 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:50:23.079505 | orchestrator | 2025-05-30 00:50:23 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:50:23.079521 | orchestrator | 2025-05-30 00:50:23 | INFO  | Wait 1 second(s) until the next check [... identical polling cycles for tasks fb4c5da4-6736-4528-a700-d20c81fc8612, 64330197-875d-4d30-967d-17fad208bab1, 55208b3f-66d1-46ee-8d7d-87c50565a6ea and 3e3bb1ef-f820-458f-9d16-87e9a792aba0 repeat roughly every three seconds from 00:50:26 through 00:50:53; all four tasks remain in state STARTED ...] 2025-05-30 00:50:56.665494 | orchestrator | 2025-05-30 00:50:56 | INFO  | Task 
fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:50:56.666422 | orchestrator | 2025-05-30 00:50:56 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:50:56.666616 | orchestrator | 2025-05-30 00:50:56 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:50:56.667671 | orchestrator | 2025-05-30 00:50:56 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:50:56.667699 | orchestrator | 2025-05-30 00:50:56 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:50:59.724929 | orchestrator | 2025-05-30 00:50:59 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:50:59.726393 | orchestrator | 2025-05-30 00:50:59 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:50:59.728179 | orchestrator | 2025-05-30 00:50:59 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:50:59.730536 | orchestrator | 2025-05-30 00:50:59 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:50:59.730644 | orchestrator | 2025-05-30 00:50:59 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:51:02.768782 | orchestrator | 2025-05-30 00:51:02 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:51:02.770312 | orchestrator | 2025-05-30 00:51:02 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:51:02.772159 | orchestrator | 2025-05-30 00:51:02 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:51:02.774063 | orchestrator | 2025-05-30 00:51:02 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:51:02.774210 | orchestrator | 2025-05-30 00:51:02 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:51:05.823786 | orchestrator | 2025-05-30 00:51:05 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:51:05.826577 | orchestrator | 2025-05-30 00:51:05 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state STARTED 2025-05-30 00:51:05.831605 | orchestrator | 2025-05-30 00:51:05 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:51:05.832956 | orchestrator | 2025-05-30 00:51:05 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:51:05.833289 | orchestrator | 2025-05-30 00:51:05 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:51:08.880641 | orchestrator | 2025-05-30 00:51:08 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:51:08.881574 | orchestrator | 2025-05-30 00:51:08 | INFO  | Task 64330197-875d-4d30-967d-17fad208bab1 is in state SUCCESS 2025-05-30 00:51:08.883469 | orchestrator | 2025-05-30 00:51:08.883562 | orchestrator | 2025-05-30 00:51:08.883579 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-30 00:51:08.883591 | orchestrator | 2025-05-30 00:51:08.883602 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-30 00:51:08.883614 | orchestrator | Friday 30 May 2025 00:48:54 +0000 (0:00:00.230) 0:00:00.230 ************ 2025-05-30 00:51:08.883625 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:51:08.883638 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:51:08.883649 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:51:08.883659 | orchestrator | 
ok: [testbed-node-3] 2025-05-30 00:51:08.883670 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:51:08.883681 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:51:08.883717 | orchestrator | 2025-05-30 00:51:08.883730 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-30 00:51:08.883741 | orchestrator | Friday 30 May 2025 00:48:54 +0000 (0:00:00.680) 0:00:00.910 ************ 2025-05-30 00:51:08.883751 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-05-30 00:51:08.883763 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-05-30 00:51:08.883774 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-05-30 00:51:08.883784 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-05-30 00:51:08.883826 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-05-30 00:51:08.883838 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-05-30 00:51:08.883849 | orchestrator | 2025-05-30 00:51:08.883860 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-05-30 00:51:08.883870 | orchestrator | 2025-05-30 00:51:08.883882 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-05-30 00:51:08.883893 | orchestrator | Friday 30 May 2025 00:48:56 +0000 (0:00:01.278) 0:00:02.189 ************ 2025-05-30 00:51:08.883905 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:51:08.883918 | orchestrator | 2025-05-30 00:51:08.883928 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-05-30 00:51:08.883939 | orchestrator | Friday 30 May 2025 00:48:57 +0000 (0:00:01.265) 0:00:03.454 ************ 2025-05-30 00:51:08.883952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.883980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.883993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.884004 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.884018 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.884060 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.884084 | orchestrator | 2025-05-30 00:51:08.884098 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-05-30 00:51:08.884110 | orchestrator | Friday 30 May 2025 00:48:58 +0000 (0:00:01.316) 0:00:04.771 ************ 2025-05-30 00:51:08.884124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.884149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.884161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.884172 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.884188 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': 
{'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.884200 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.884217 | orchestrator | 2025-05-30 00:51:08.884228 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-05-30 00:51:08.884239 | orchestrator | Friday 30 May 2025 00:49:01 +0000 (0:00:02.814) 0:00:07.585 ************ 2025-05-30 00:51:08.884250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.884269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.884287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.884299 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.884310 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.884321 | orchestrator | 
changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.884332 | orchestrator | 2025-05-30 00:51:08.884342 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-05-30 00:51:08.884353 | orchestrator | Friday 30 May 2025 00:49:03 +0000 (0:00:01.584) 0:00:09.170 ************ 2025-05-30 00:51:08.884369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.884380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.884391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.884410 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.884421 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.884439 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.884451 | orchestrator | 2025-05-30 00:51:08.884462 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-05-30 00:51:08.884472 | orchestrator | Friday 30 May 2025 00:49:04 +0000 (0:00:01.558) 0:00:10.728 ************ 2025-05-30 00:51:08.884483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.884494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.884505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.884521 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.884532 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.884544 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.884562 | orchestrator | 2025-05-30 00:51:08.884573 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-05-30 00:51:08.884584 | orchestrator | Friday 30 May 2025 00:49:06 +0000 (0:00:01.567) 0:00:12.295 ************ 2025-05-30 00:51:08.884595 | orchestrator | changed: [testbed-node-0] 
2025-05-30 00:51:08.884607 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:51:08.884618 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:51:08.884628 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:51:08.884639 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:51:08.884662 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:51:08.884673 | orchestrator | 2025-05-30 00:51:08.884684 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-05-30 00:51:08.884695 | orchestrator | Friday 30 May 2025 00:49:09 +0000 (0:00:02.826) 0:00:15.121 ************ 2025-05-30 00:51:08.884706 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-05-30 00:51:08.884716 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-05-30 00:51:08.884727 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-05-30 00:51:08.884743 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-05-30 00:51:08.884754 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-05-30 00:51:08.884765 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-05-30 00:51:08.884775 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-30 00:51:08.884786 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-30 00:51:08.884839 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-30 00:51:08.884851 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-30 00:51:08.884862 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-30 00:51:08.884873 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-30 00:51:08.884884 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-30 00:51:08.884896 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-30 00:51:08.884907 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-30 00:51:08.884917 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-30 00:51:08.884928 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-30 00:51:08.884939 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-30 00:51:08.884950 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-30 00:51:08.884962 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 
2025-05-30 00:51:08.884980 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-30 00:51:08.884991 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-30 00:51:08.885002 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-30 00:51:08.885017 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-30 00:51:08.885028 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-30 00:51:08.885039 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-30 00:51:08.885050 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-30 00:51:08.885060 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-30 00:51:08.885071 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-30 00:51:08.885081 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-30 00:51:08.885092 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-30 00:51:08.885103 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-30 00:51:08.885113 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-30 00:51:08.885124 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-30 00:51:08.885135 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-30 00:51:08.885145 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-30 00:51:08.885156 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-30 00:51:08.885167 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-30 00:51:08.885178 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-30 00:51:08.885189 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-30 00:51:08.885205 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-30 00:51:08.885216 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-30 00:51:08.885227 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-05-30 00:51:08.885239 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-05-30 00:51:08.885258 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 
2025-05-30 00:51:08.885269 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-05-30 00:51:08.885280 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-05-30 00:51:08.885291 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-05-30 00:51:08.885302 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-30 00:51:08.885319 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-30 00:51:08.885330 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-30 00:51:08.885340 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-30 00:51:08.885351 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-30 00:51:08.885362 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-30 00:51:08.885372 | orchestrator | 2025-05-30 00:51:08.885383 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-30 00:51:08.885394 | orchestrator | Friday 30 May 2025 00:49:28 +0000 (0:00:19.580) 0:00:34.702 ************ 2025-05-30 00:51:08.885405 | orchestrator | 2025-05-30 00:51:08.885416 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-30 00:51:08.885426 | orchestrator | Friday 30 May 2025 00:49:28 +0000 (0:00:00.056) 0:00:34.759 ************ 2025-05-30 00:51:08.885436 | orchestrator | 2025-05-30 00:51:08.885447 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-30 00:51:08.885458 | orchestrator | Friday 30 May 2025 00:49:29 +0000 (0:00:00.230) 0:00:34.989 ************ 2025-05-30 00:51:08.885469 | orchestrator | 2025-05-30 00:51:08.885484 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-30 00:51:08.885495 | orchestrator | Friday 30 May 2025 00:49:29 +0000 (0:00:00.068) 0:00:35.057 ************ 2025-05-30 00:51:08.885506 | orchestrator | 2025-05-30 00:51:08.885516 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-30 00:51:08.885527 | orchestrator | Friday 30 May 2025 00:49:29 +0000 (0:00:00.078) 0:00:35.136 ************ 2025-05-30 00:51:08.885537 | orchestrator | 2025-05-30 00:51:08.885548 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-30 00:51:08.885558 | orchestrator | Friday 30 May 2025 00:49:29 +0000 (0:00:00.066) 0:00:35.202 ************ 2025-05-30 00:51:08.885569 | orchestrator | 2025-05-30 00:51:08.885579 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-05-30 00:51:08.885590 | orchestrator | Friday 30 May 2025 00:49:29 +0000 (0:00:00.059) 0:00:35.262 ************ 2025-05-30 00:51:08.885601 | orchestrator | ok: [testbed-node-3] 
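The last loop items above set ovn-cms-options to 'enable-chassis-as-gw,availability-zones=nova' on testbed-node-0/1/2 only and clear it on the computes, which is how OVN knows on which chassis router gateway ports may be scheduled. With the external_ids written, the role flushes its handlers: systemd is reloaded and the ovn_controller container is restarted on all six nodes (at roughly 25 s this restart is the single longest task in the recap further down). Once the southbound cluster is serving on 6642, the registered chassis and their bindings could be checked with something along these lines (illustrative only; assumes the ovn-sbctl client is available in the ovn_northd image):

  docker exec ovn_northd ovn-sbctl --db=tcp:192.168.16.10:6642 show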
2025-05-30 00:51:08.885611 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:51:08.885622 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:51:08.885633 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:51:08.885643 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:51:08.885654 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:51:08.885664 | orchestrator | 2025-05-30 00:51:08.885675 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-05-30 00:51:08.885686 | orchestrator | Friday 30 May 2025 00:49:31 +0000 (0:00:02.094) 0:00:37.356 ************ 2025-05-30 00:51:08.885696 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:51:08.885707 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:51:08.885718 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:51:08.885728 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:51:08.885739 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:51:08.885749 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:51:08.885759 | orchestrator | 2025-05-30 00:51:08.885770 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-05-30 00:51:08.885780 | orchestrator | 2025-05-30 00:51:08.885814 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-30 00:51:08.885835 | orchestrator | Friday 30 May 2025 00:49:56 +0000 (0:00:25.430) 0:01:02.787 ************ 2025-05-30 00:51:08.885855 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:51:08.885879 | orchestrator | 2025-05-30 00:51:08.885890 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-30 00:51:08.885901 | orchestrator | Friday 30 May 2025 00:49:57 +0000 (0:00:01.120) 0:01:03.907 ************ 2025-05-30 00:51:08.885912 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:51:08.885923 | orchestrator | 2025-05-30 00:51:08.885940 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-05-30 00:51:08.885951 | orchestrator | Friday 30 May 2025 00:49:58 +0000 (0:00:00.920) 0:01:04.828 ************ 2025-05-30 00:51:08.885962 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:51:08.885972 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:51:08.885983 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:51:08.885994 | orchestrator | 2025-05-30 00:51:08.886004 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-05-30 00:51:08.886015 | orchestrator | Friday 30 May 2025 00:49:59 +0000 (0:00:01.087) 0:01:05.915 ************ 2025-05-30 00:51:08.886085 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:51:08.886096 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:51:08.886107 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:51:08.886123 | orchestrator | 2025-05-30 00:51:08.886142 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-05-30 00:51:08.886162 | orchestrator | Friday 30 May 2025 00:50:00 +0000 (0:00:00.322) 0:01:06.238 ************ 2025-05-30 00:51:08.886181 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:51:08.886200 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:51:08.886212 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:51:08.886227 | orchestrator | 
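Here the ovn-db role has switched to its lookup_cluster.yml phase on the three database hosts: it looks for pre-existing ovn_nb_db/ovn_sb_db Docker volumes and divides the hosts into groups with and without prior data. Because this is a fresh testbed, all of the existing-cluster checks that follow (service port liveness, leader/follower detection, removal of stale members) are skipped, and bootstrap-initial.yml later brings up new three-member Raft clusters for both databases. Roughly the same inspection can be done by hand, shown here only as an illustrative sketch:

  docker volume ls --filter name=ovn_nb_db
  docker volume ls --filter name=ovn_sb_db
  # once the DB containers are running (control socket path may differ per image):
  docker exec ovn_nb_db ovn-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound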
2025-05-30 00:51:08.886245 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-05-30 00:51:08.886262 | orchestrator | Friday 30 May 2025 00:50:00 +0000 (0:00:00.404) 0:01:06.642 ************ 2025-05-30 00:51:08.886279 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:51:08.886297 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:51:08.886315 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:51:08.886334 | orchestrator | 2025-05-30 00:51:08.886353 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-05-30 00:51:08.886380 | orchestrator | Friday 30 May 2025 00:50:01 +0000 (0:00:00.460) 0:01:07.102 ************ 2025-05-30 00:51:08.886391 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:51:08.886412 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:51:08.886423 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:51:08.886434 | orchestrator | 2025-05-30 00:51:08.886444 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-05-30 00:51:08.886455 | orchestrator | Friday 30 May 2025 00:50:01 +0000 (0:00:00.278) 0:01:07.381 ************ 2025-05-30 00:51:08.886466 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:51:08.886477 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:51:08.886487 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:51:08.886498 | orchestrator | 2025-05-30 00:51:08.886509 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-05-30 00:51:08.886520 | orchestrator | Friday 30 May 2025 00:50:01 +0000 (0:00:00.337) 0:01:07.719 ************ 2025-05-30 00:51:08.886530 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:51:08.886541 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:51:08.886552 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:51:08.886562 | orchestrator | 2025-05-30 00:51:08.886573 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-05-30 00:51:08.886584 | orchestrator | Friday 30 May 2025 00:50:02 +0000 (0:00:00.343) 0:01:08.062 ************ 2025-05-30 00:51:08.886595 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:51:08.886605 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:51:08.886616 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:51:08.886626 | orchestrator | 2025-05-30 00:51:08.886647 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-05-30 00:51:08.886680 | orchestrator | Friday 30 May 2025 00:50:02 +0000 (0:00:00.355) 0:01:08.418 ************ 2025-05-30 00:51:08.886691 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:51:08.886702 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:51:08.886712 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:51:08.886723 | orchestrator | 2025-05-30 00:51:08.886734 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-05-30 00:51:08.886745 | orchestrator | Friday 30 May 2025 00:50:02 +0000 (0:00:00.227) 0:01:08.645 ************ 2025-05-30 00:51:08.886755 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:51:08.886766 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:51:08.886777 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:51:08.886787 | orchestrator | 2025-05-30 00:51:08.886827 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with 
no leader] ***************** 2025-05-30 00:51:08.886839 | orchestrator | Friday 30 May 2025 00:50:03 +0000 (0:00:00.358) 0:01:09.004 ************ 2025-05-30 00:51:08.886850 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:51:08.886860 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:51:08.886871 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:51:08.886881 | orchestrator | 2025-05-30 00:51:08.886892 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-05-30 00:51:08.886902 | orchestrator | Friday 30 May 2025 00:50:03 +0000 (0:00:00.375) 0:01:09.379 ************ 2025-05-30 00:51:08.886913 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:51:08.886924 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:51:08.886934 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:51:08.886944 | orchestrator | 2025-05-30 00:51:08.886955 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-05-30 00:51:08.886965 | orchestrator | Friday 30 May 2025 00:50:03 +0000 (0:00:00.336) 0:01:09.715 ************ 2025-05-30 00:51:08.886976 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:51:08.886987 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:51:08.886997 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:51:08.887008 | orchestrator | 2025-05-30 00:51:08.887018 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-05-30 00:51:08.887029 | orchestrator | Friday 30 May 2025 00:50:04 +0000 (0:00:00.321) 0:01:10.037 ************ 2025-05-30 00:51:08.887039 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:51:08.887050 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:51:08.887060 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:51:08.887071 | orchestrator | 2025-05-30 00:51:08.887081 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-05-30 00:51:08.887092 | orchestrator | Friday 30 May 2025 00:50:04 +0000 (0:00:00.507) 0:01:10.545 ************ 2025-05-30 00:51:08.887103 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:51:08.887114 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:51:08.887124 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:51:08.887135 | orchestrator | 2025-05-30 00:51:08.887153 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-05-30 00:51:08.887165 | orchestrator | Friday 30 May 2025 00:50:05 +0000 (0:00:00.544) 0:01:11.089 ************ 2025-05-30 00:51:08.887176 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:51:08.887186 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:51:08.887197 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:51:08.887208 | orchestrator | 2025-05-30 00:51:08.887218 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-05-30 00:51:08.887229 | orchestrator | Friday 30 May 2025 00:50:05 +0000 (0:00:00.420) 0:01:11.509 ************ 2025-05-30 00:51:08.887239 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:51:08.887250 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:51:08.887261 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:51:08.887271 | orchestrator | 2025-05-30 00:51:08.887282 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-30 00:51:08.887293 | 
orchestrator | Friday 30 May 2025 00:50:06 +0000 (0:00:00.566) 0:01:12.076 ************ 2025-05-30 00:51:08.887310 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:51:08.887321 | orchestrator | 2025-05-30 00:51:08.887332 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-05-30 00:51:08.887342 | orchestrator | Friday 30 May 2025 00:50:07 +0000 (0:00:00.999) 0:01:13.075 ************ 2025-05-30 00:51:08.887352 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:51:08.887363 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:51:08.887374 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:51:08.887384 | orchestrator | 2025-05-30 00:51:08.887395 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-05-30 00:51:08.887405 | orchestrator | Friday 30 May 2025 00:50:07 +0000 (0:00:00.763) 0:01:13.838 ************ 2025-05-30 00:51:08.887416 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:51:08.887427 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:51:08.887437 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:51:08.887448 | orchestrator | 2025-05-30 00:51:08.887458 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-05-30 00:51:08.887469 | orchestrator | Friday 30 May 2025 00:50:09 +0000 (0:00:01.270) 0:01:15.108 ************ 2025-05-30 00:51:08.887479 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:51:08.887490 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:51:08.887501 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:51:08.887519 | orchestrator | 2025-05-30 00:51:08.887537 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-05-30 00:51:08.887556 | orchestrator | Friday 30 May 2025 00:50:09 +0000 (0:00:00.436) 0:01:15.545 ************ 2025-05-30 00:51:08.887574 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:51:08.887594 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:51:08.887613 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:51:08.887625 | orchestrator | 2025-05-30 00:51:08.887636 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-05-30 00:51:08.887646 | orchestrator | Friday 30 May 2025 00:50:09 +0000 (0:00:00.349) 0:01:15.895 ************ 2025-05-30 00:51:08.887657 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:51:08.887668 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:51:08.887678 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:51:08.887689 | orchestrator | 2025-05-30 00:51:08.887700 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-05-30 00:51:08.887716 | orchestrator | Friday 30 May 2025 00:50:10 +0000 (0:00:00.282) 0:01:16.178 ************ 2025-05-30 00:51:08.887727 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:51:08.887738 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:51:08.887749 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:51:08.887759 | orchestrator | 2025-05-30 00:51:08.887770 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-05-30 00:51:08.887780 | orchestrator | Friday 30 May 2025 00:50:10 +0000 (0:00:00.515) 0:01:16.693 ************ 2025-05-30 00:51:08.887825 | orchestrator | 
skipping: [testbed-node-0] 2025-05-30 00:51:08.887840 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:51:08.887850 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:51:08.887861 | orchestrator | 2025-05-30 00:51:08.887872 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-05-30 00:51:08.887882 | orchestrator | Friday 30 May 2025 00:50:11 +0000 (0:00:00.434) 0:01:17.128 ************ 2025-05-30 00:51:08.887893 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:51:08.887903 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:51:08.887914 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:51:08.887924 | orchestrator | 2025-05-30 00:51:08.887935 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-05-30 00:51:08.887946 | orchestrator | Friday 30 May 2025 00:50:11 +0000 (0:00:00.386) 0:01:17.515 ************ 2025-05-30 00:51:08.887965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.888001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.888032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.888047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.888059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.888071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-05-30 00:51:08.888082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.888098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.888110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.888121 | orchestrator | 2025-05-30 00:51:08.888132 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-30 00:51:08.888143 | orchestrator | Friday 30 May 2025 00:50:13 +0000 (0:00:01.572) 0:01:19.088 ************ 2025-05-30 00:51:08.888161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.888172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.888184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.888202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.888214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.888225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.888236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.888256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.888275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.888296 | orchestrator | 2025-05-30 00:51:08.888315 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-30 00:51:08.888346 | orchestrator | Friday 30 May 2025 00:50:17 +0000 (0:00:03.904) 0:01:22.992 ************ 2025-05-30 00:51:08.888367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.888478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.888516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.888552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.888573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.888592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.888610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.888631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.888650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.888682 | orchestrator | 2025-05-30 00:51:08.888710 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-30 00:51:08.888732 | orchestrator | Friday 30 May 2025 00:50:19 +0000 (0:00:02.446) 0:01:25.439 ************ 2025-05-30 00:51:08.888750 | orchestrator | 2025-05-30 00:51:08.888770 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-30 00:51:08.888789 | orchestrator | Friday 30 May 2025 00:50:19 +0000 (0:00:00.060) 0:01:25.499 ************ 2025-05-30 00:51:08.888825 | orchestrator | 2025-05-30 00:51:08.888836 | orchestrator | TASK [ovn-db : Flush handlers] 
************************************************* 2025-05-30 00:51:08.888855 | orchestrator | Friday 30 May 2025 00:50:19 +0000 (0:00:00.063) 0:01:25.563 ************ 2025-05-30 00:51:08.888873 | orchestrator | 2025-05-30 00:51:08.888892 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-05-30 00:51:08.888911 | orchestrator | Friday 30 May 2025 00:50:19 +0000 (0:00:00.220) 0:01:25.783 ************ 2025-05-30 00:51:08.888923 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:51:08.888934 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:51:08.888945 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:51:08.888955 | orchestrator | 2025-05-30 00:51:08.888966 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-05-30 00:51:08.888977 | orchestrator | Friday 30 May 2025 00:50:22 +0000 (0:00:02.942) 0:01:28.726 ************ 2025-05-30 00:51:08.888987 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:51:08.888998 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:51:08.889009 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:51:08.889019 | orchestrator | 2025-05-30 00:51:08.889030 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-05-30 00:51:08.889041 | orchestrator | Friday 30 May 2025 00:50:25 +0000 (0:00:02.532) 0:01:31.258 ************ 2025-05-30 00:51:08.889051 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:51:08.889062 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:51:08.889073 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:51:08.889086 | orchestrator | 2025-05-30 00:51:08.889104 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-05-30 00:51:08.889124 | orchestrator | Friday 30 May 2025 00:50:28 +0000 (0:00:03.073) 0:01:34.331 ************ 2025-05-30 00:51:08.889144 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:51:08.889156 | orchestrator | 2025-05-30 00:51:08.889167 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-30 00:51:08.889178 | orchestrator | Friday 30 May 2025 00:50:28 +0000 (0:00:00.124) 0:01:34.456 ************ 2025-05-30 00:51:08.889189 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:51:08.889199 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:51:08.889210 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:51:08.889221 | orchestrator | 2025-05-30 00:51:08.889240 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-30 00:51:08.889251 | orchestrator | Friday 30 May 2025 00:50:29 +0000 (0:00:00.923) 0:01:35.380 ************ 2025-05-30 00:51:08.889262 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:51:08.889273 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:51:08.889284 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:51:08.889294 | orchestrator | 2025-05-30 00:51:08.889305 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-30 00:51:08.889317 | orchestrator | Friday 30 May 2025 00:50:30 +0000 (0:00:00.601) 0:01:35.981 ************ 2025-05-30 00:51:08.889335 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:51:08.889355 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:51:08.889374 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:51:08.889387 | orchestrator | 2025-05-30 00:51:08.889401 | 
orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-30 00:51:08.889420 | orchestrator | Friday 30 May 2025 00:50:31 +0000 (0:00:01.019) 0:01:37.001 ************ 2025-05-30 00:51:08.889439 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:51:08.889457 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:51:08.889488 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:51:08.889505 | orchestrator | 2025-05-30 00:51:08.889516 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-30 00:51:08.889527 | orchestrator | Friday 30 May 2025 00:50:31 +0000 (0:00:00.765) 0:01:37.767 ************ 2025-05-30 00:51:08.889538 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:51:08.889549 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:51:08.889559 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:51:08.889570 | orchestrator | 2025-05-30 00:51:08.889581 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-30 00:51:08.889591 | orchestrator | Friday 30 May 2025 00:50:33 +0000 (0:00:01.311) 0:01:39.078 ************ 2025-05-30 00:51:08.889602 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:51:08.889612 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:51:08.889623 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:51:08.889633 | orchestrator | 2025-05-30 00:51:08.889644 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-05-30 00:51:08.889671 | orchestrator | Friday 30 May 2025 00:50:34 +0000 (0:00:00.868) 0:01:39.946 ************ 2025-05-30 00:51:08.889682 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:51:08.889702 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:51:08.889714 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:51:08.889724 | orchestrator | 2025-05-30 00:51:08.889735 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-05-30 00:51:08.889746 | orchestrator | Friday 30 May 2025 00:50:34 +0000 (0:00:00.477) 0:01:40.424 ************ 2025-05-30 00:51:08.889757 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.889775 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.889787 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.889825 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 
'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.889837 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.889849 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.889875 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.889888 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.889899 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.889910 | orchestrator | 2025-05-30 00:51:08.889921 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-30 00:51:08.889933 | orchestrator | Friday 30 May 2025 00:50:36 +0000 (0:00:01.554) 0:01:41.979 ************ 2025-05-30 00:51:08.889944 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.889955 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.889972 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.889983 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.889995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.890006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.890083 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.890098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.890110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.890121 | orchestrator | 2025-05-30 00:51:08.890131 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-30 00:51:08.890142 | orchestrator | Friday 30 May 2025 00:50:40 +0000 (0:00:04.158) 
0:01:46.137 ************ 2025-05-30 00:51:08.890154 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.890165 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.890176 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.890192 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.890204 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.890215 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.890232 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.890250 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.890263 | orchestrator | 
ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 00:51:08.890274 | orchestrator | 2025-05-30 00:51:08.890285 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-30 00:51:08.890295 | orchestrator | Friday 30 May 2025 00:50:43 +0000 (0:00:02.930) 0:01:49.068 ************ 2025-05-30 00:51:08.890306 | orchestrator | 2025-05-30 00:51:08.890317 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-30 00:51:08.890328 | orchestrator | Friday 30 May 2025 00:50:43 +0000 (0:00:00.244) 0:01:49.313 ************ 2025-05-30 00:51:08.890339 | orchestrator | 2025-05-30 00:51:08.890349 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-30 00:51:08.890360 | orchestrator | Friday 30 May 2025 00:50:43 +0000 (0:00:00.066) 0:01:49.380 ************ 2025-05-30 00:51:08.890371 | orchestrator | 2025-05-30 00:51:08.890382 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-05-30 00:51:08.890393 | orchestrator | Friday 30 May 2025 00:50:43 +0000 (0:00:00.058) 0:01:49.438 ************ 2025-05-30 00:51:08.890403 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:51:08.890414 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:51:08.890425 | orchestrator | 2025-05-30 00:51:08.890436 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-05-30 00:51:08.890446 | orchestrator | Friday 30 May 2025 00:50:49 +0000 (0:00:06.289) 0:01:55.727 ************ 2025-05-30 00:51:08.890457 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:51:08.890468 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:51:08.890479 | orchestrator | 2025-05-30 00:51:08.890490 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-05-30 00:51:08.890501 | orchestrator | Friday 30 May 2025 00:50:56 +0000 (0:00:06.513) 0:02:02.241 ************ 2025-05-30 00:51:08.890512 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:51:08.890523 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:51:08.890533 | orchestrator | 2025-05-30 00:51:08.890544 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-05-30 00:51:08.890555 | orchestrator | Friday 30 May 2025 00:51:02 +0000 (0:00:06.194) 0:02:08.436 ************ 2025-05-30 00:51:08.890566 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:51:08.890577 | orchestrator | 2025-05-30 00:51:08.890588 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-30 00:51:08.890598 | orchestrator | Friday 30 May 2025 00:51:02 +0000 (0:00:00.274) 0:02:08.710 ************ 2025-05-30 00:51:08.890609 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:51:08.890632 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:51:08.890643 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:51:08.890653 | orchestrator | 2025-05-30 00:51:08.890664 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 
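This NB connection-settings task and its SB counterpart below are only applied on the host currently holding the Raft leadership, testbed-node-0 in this run, which is why the other two members report 'skipping'. The leader publishes the TCP listeners the rest of the deployment uses: 6641 for the northbound database and 6642 for the southbound database. Conceptually this amounts to something like the following ovn-nbctl/ovn-sbctl calls, given purely as a hedged sketch with node-0's address, not the literal task:

  ovn-nbctl set-connection ptcp:6641:192.168.16.10
  ovn-sbctl set-connection ptcp:6642:192.168.16.10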
2025-05-30 00:51:08.890676 | orchestrator | Friday 30 May 2025 00:51:03 +0000 (0:00:00.751) 0:02:09.461 ************ 2025-05-30 00:51:08.890687 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:51:08.890697 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:51:08.890708 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:51:08.890719 | orchestrator | 2025-05-30 00:51:08.890739 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-30 00:51:08.890759 | orchestrator | Friday 30 May 2025 00:51:04 +0000 (0:00:00.658) 0:02:10.120 ************ 2025-05-30 00:51:08.890781 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:51:08.890857 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:51:08.890880 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:51:08.890900 | orchestrator | 2025-05-30 00:51:08.890920 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-30 00:51:08.890941 | orchestrator | Friday 30 May 2025 00:51:05 +0000 (0:00:00.934) 0:02:11.055 ************ 2025-05-30 00:51:08.890959 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:51:08.890973 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:51:08.890983 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:51:08.890994 | orchestrator | 2025-05-30 00:51:08.891005 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-30 00:51:08.891016 | orchestrator | Friday 30 May 2025 00:51:05 +0000 (0:00:00.732) 0:02:11.788 ************ 2025-05-30 00:51:08.891026 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:51:08.891036 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:51:08.891046 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:51:08.891055 | orchestrator | 2025-05-30 00:51:08.891065 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-30 00:51:08.891075 | orchestrator | Friday 30 May 2025 00:51:06 +0000 (0:00:00.722) 0:02:12.510 ************ 2025-05-30 00:51:08.891084 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:51:08.891094 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:51:08.891103 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:51:08.891113 | orchestrator | 2025-05-30 00:51:08.891123 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:51:08.891133 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-05-30 00:51:08.891143 | orchestrator | testbed-node-1 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-30 00:51:08.891162 | orchestrator | testbed-node-2 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-30 00:51:08.891173 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:51:08.891183 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:51:08.891193 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 00:51:08.891203 | orchestrator | 2025-05-30 00:51:08.891213 | orchestrator | 2025-05-30 00:51:08.891223 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-30 00:51:08.891233 | orchestrator | Friday 30 May 2025 00:51:07 +0000 (0:00:01.174) 
0:02:13.684 ************ 2025-05-30 00:51:08.891242 | orchestrator | =============================================================================== 2025-05-30 00:51:08.891252 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 25.43s 2025-05-30 00:51:08.891270 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.58s 2025-05-30 00:51:08.891280 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 9.27s 2025-05-30 00:51:08.891290 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 9.23s 2025-05-30 00:51:08.891299 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.05s 2025-05-30 00:51:08.891309 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.16s 2025-05-30 00:51:08.891318 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.90s 2025-05-30 00:51:08.891328 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.93s 2025-05-30 00:51:08.891339 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.83s 2025-05-30 00:51:08.891348 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.81s 2025-05-30 00:51:08.891358 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.45s 2025-05-30 00:51:08.891367 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.09s 2025-05-30 00:51:08.891377 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.58s 2025-05-30 00:51:08.891387 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.57s 2025-05-30 00:51:08.891397 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.57s 2025-05-30 00:51:08.891406 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.56s 2025-05-30 00:51:08.891416 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.55s 2025-05-30 00:51:08.891425 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.32s 2025-05-30 00:51:08.891441 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.31s 2025-05-30 00:51:08.891451 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.28s 2025-05-30 00:51:08.891461 | orchestrator | 2025-05-30 00:51:08 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:51:08.891471 | orchestrator | 2025-05-30 00:51:08 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:51:08.891481 | orchestrator | 2025-05-30 00:51:08 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:51:11.934924 | orchestrator | 2025-05-30 00:51:11 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:51:11.935772 | orchestrator | 2025-05-30 00:51:11 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:51:11.936696 | orchestrator | 2025-05-30 00:51:11 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:51:11.936718 | orchestrator | 2025-05-30 00:51:11 | INFO  | Wait 1 second(s) until the next check 2025-05-30 
00:51:14.992291 | orchestrator | 2025-05-30 00:51:14 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:51:14.993398 | orchestrator | 2025-05-30 00:51:14 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:51:14.994946 | orchestrator | 2025-05-30 00:51:14 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:51:14.995291 | orchestrator | 2025-05-30 00:51:14 | INFO  | Wait 1 second(s) until the next check
[... the same three status checks repeat roughly every 3 seconds from 00:51:18 through 00:54:18: tasks fb4c5da4-6736-4528-a700-d20c81fc8612, 55208b3f-66d1-46ee-8d7d-87c50565a6ea and 3e3bb1ef-f820-458f-9d16-87e9a792aba0 remain in state STARTED, each round followed by "Wait 1 second(s) until the next check" ...]
2025-05-30 00:54:21.284491 | orchestrator | 2025-05-30 00:54:21 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:54:21.286127 | orchestrator | 2025-05-30 00:54:21 | INFO  | Task 
55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:54:21.287486 | orchestrator | 2025-05-30 00:54:21 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:54:21.287512 | orchestrator | 2025-05-30 00:54:21 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:54:24.347917 | orchestrator | 2025-05-30 00:54:24 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:54:24.349119 | orchestrator | 2025-05-30 00:54:24 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:54:24.350714 | orchestrator | 2025-05-30 00:54:24 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:54:24.350747 | orchestrator | 2025-05-30 00:54:24 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:54:27.399539 | orchestrator | 2025-05-30 00:54:27 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:54:27.399657 | orchestrator | 2025-05-30 00:54:27 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:54:27.400501 | orchestrator | 2025-05-30 00:54:27 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:54:27.402072 | orchestrator | 2025-05-30 00:54:27 | INFO  | Task 0df2cde8-5e54-429a-b36b-2bc842bfdcae is in state STARTED 2025-05-30 00:54:27.402353 | orchestrator | 2025-05-30 00:54:27 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:54:30.456362 | orchestrator | 2025-05-30 00:54:30 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:54:30.458456 | orchestrator | 2025-05-30 00:54:30 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:54:30.459735 | orchestrator | 2025-05-30 00:54:30 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:54:30.461312 | orchestrator | 2025-05-30 00:54:30 | INFO  | Task 0df2cde8-5e54-429a-b36b-2bc842bfdcae is in state STARTED 2025-05-30 00:54:30.461588 | orchestrator | 2025-05-30 00:54:30 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:54:33.507536 | orchestrator | 2025-05-30 00:54:33 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:54:33.508918 | orchestrator | 2025-05-30 00:54:33 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:54:33.511605 | orchestrator | 2025-05-30 00:54:33 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:54:33.512604 | orchestrator | 2025-05-30 00:54:33 | INFO  | Task 0df2cde8-5e54-429a-b36b-2bc842bfdcae is in state STARTED 2025-05-30 00:54:33.512701 | orchestrator | 2025-05-30 00:54:33 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:54:36.561978 | orchestrator | 2025-05-30 00:54:36 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:54:36.565162 | orchestrator | 2025-05-30 00:54:36 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:54:36.566654 | orchestrator | 2025-05-30 00:54:36 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:54:36.568449 | orchestrator | 2025-05-30 00:54:36 | INFO  | Task 0df2cde8-5e54-429a-b36b-2bc842bfdcae is in state STARTED 2025-05-30 00:54:36.568873 | orchestrator | 2025-05-30 00:54:36 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:54:39.614236 | orchestrator | 2025-05-30 00:54:39 | INFO  | Task 
fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:54:39.616343 | orchestrator | 2025-05-30 00:54:39 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:54:39.618416 | orchestrator | 2025-05-30 00:54:39 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:54:39.619872 | orchestrator | 2025-05-30 00:54:39 | INFO  | Task 0df2cde8-5e54-429a-b36b-2bc842bfdcae is in state SUCCESS 2025-05-30 00:54:39.619896 | orchestrator | 2025-05-30 00:54:39 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:54:42.664937 | orchestrator | 2025-05-30 00:54:42 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:54:42.666102 | orchestrator | 2025-05-30 00:54:42 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:54:42.667207 | orchestrator | 2025-05-30 00:54:42 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:54:42.667240 | orchestrator | 2025-05-30 00:54:42 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:54:45.713608 | orchestrator | 2025-05-30 00:54:45 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:54:45.713906 | orchestrator | 2025-05-30 00:54:45 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:54:45.715255 | orchestrator | 2025-05-30 00:54:45 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:54:45.715283 | orchestrator | 2025-05-30 00:54:45 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:54:48.766279 | orchestrator | 2025-05-30 00:54:48 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:54:48.767376 | orchestrator | 2025-05-30 00:54:48 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:54:48.775608 | orchestrator | 2025-05-30 00:54:48 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:54:48.775674 | orchestrator | 2025-05-30 00:54:48 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:54:51.820196 | orchestrator | 2025-05-30 00:54:51 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:54:51.821048 | orchestrator | 2025-05-30 00:54:51 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:54:51.821754 | orchestrator | 2025-05-30 00:54:51 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:54:51.821781 | orchestrator | 2025-05-30 00:54:51 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:54:54.871601 | orchestrator | 2025-05-30 00:54:54 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:54:54.873797 | orchestrator | 2025-05-30 00:54:54 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:54:54.876333 | orchestrator | 2025-05-30 00:54:54 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:54:54.876553 | orchestrator | 2025-05-30 00:54:54 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:54:57.928238 | orchestrator | 2025-05-30 00:54:57 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:54:57.930193 | orchestrator | 2025-05-30 00:54:57 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state STARTED 2025-05-30 00:54:57.930227 | orchestrator | 2025-05-30 00:54:57 | INFO  | Task 
3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:54:57.930240 | orchestrator | 2025-05-30 00:54:57 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:55:00.989978 | orchestrator | 2025-05-30 00:55:00 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:55:00.990117 | orchestrator | 2025-05-30 00:55:00 | INFO  | Task e0dfe8bf-08fa-4e02-9756-0b7e1f6d50d9 is in state STARTED 2025-05-30 00:55:00.992143 | orchestrator | 2025-05-30 00:55:00 | INFO  | Task 689b1e7a-ebcc-4efa-9c5f-2d9a1b22460c is in state STARTED 2025-05-30 00:55:00.998141 | orchestrator | 2025-05-30 00:55:00 | INFO  | Task 55208b3f-66d1-46ee-8d7d-87c50565a6ea is in state SUCCESS 2025-05-30 00:55:00.999600 | orchestrator | 2025-05-30 00:55:00.999736 | orchestrator | None 2025-05-30 00:55:00.999792 | orchestrator | 2025-05-30 00:55:00.999804 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-30 00:55:00.999815 | orchestrator | 2025-05-30 00:55:00.999827 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-30 00:55:00.999940 | orchestrator | Friday 30 May 2025 00:47:38 +0000 (0:00:00.362) 0:00:00.362 ************ 2025-05-30 00:55:00.999953 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:55:00.999965 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:55:00.999976 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:55:00.999987 | orchestrator | 2025-05-30 00:55:00.999998 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-30 00:55:01.000009 | orchestrator | Friday 30 May 2025 00:47:39 +0000 (0:00:00.298) 0:00:00.660 ************ 2025-05-30 00:55:01.000022 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-05-30 00:55:01.000033 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-05-30 00:55:01.000044 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-05-30 00:55:01.000055 | orchestrator | 2025-05-30 00:55:01.000066 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-05-30 00:55:01.000077 | orchestrator | 2025-05-30 00:55:01.000088 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-05-30 00:55:01.000099 | orchestrator | Friday 30 May 2025 00:47:39 +0000 (0:00:00.257) 0:00:00.918 ************ 2025-05-30 00:55:01.000110 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:55:01.000122 | orchestrator | 2025-05-30 00:55:01.000133 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-05-30 00:55:01.000145 | orchestrator | Friday 30 May 2025 00:47:40 +0000 (0:00:00.652) 0:00:01.571 ************ 2025-05-30 00:55:01.000156 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:55:01.000167 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:55:01.000178 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:55:01.000189 | orchestrator | 2025-05-30 00:55:01.000288 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-05-30 00:55:01.000374 | orchestrator | Friday 30 May 2025 00:47:40 +0000 (0:00:00.692) 0:00:02.263 ************ 2025-05-30 00:55:01.000388 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:55:01.000551 | 
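The loadbalancer play starts by probing whether the hosts support IPv6 before deciding whether the IPv6 sysctl keys apply. The exact probe is not visible in this log; a common equivalent is simply checking for the kernel's IPv6 proc file:

```python
import os


def ipv6_supported() -> bool:
    # If the kernel has IPv6 enabled, /proc/net/if_inet6 exists and lists the
    # configured IPv6 addresses per interface.
    return os.path.exists("/proc/net/if_inet6")


if __name__ == "__main__":
    print("IPv6 support:", ipv6_supported())
```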
orchestrator | 2025-05-30 00:55:01.000570 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-05-30 00:55:01.000581 | orchestrator | Friday 30 May 2025 00:47:41 +0000 (0:00:00.560) 0:00:02.824 ************ 2025-05-30 00:55:01.000592 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:55:01.000603 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:55:01.000614 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:55:01.000625 | orchestrator | 2025-05-30 00:55:01.000636 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-05-30 00:55:01.000647 | orchestrator | Friday 30 May 2025 00:47:42 +0000 (0:00:00.879) 0:00:03.703 ************ 2025-05-30 00:55:01.000658 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-30 00:55:01.000668 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-30 00:55:01.000679 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-30 00:55:01.000689 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-30 00:55:01.000700 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-30 00:55:01.000711 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-30 00:55:01.000721 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-30 00:55:01.000732 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-30 00:55:01.000743 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-30 00:55:01.000754 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-30 00:55:01.000765 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-30 00:55:01.000775 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-30 00:55:01.000786 | orchestrator | 2025-05-30 00:55:01.000797 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-30 00:55:01.000807 | orchestrator | Friday 30 May 2025 00:47:44 +0000 (0:00:02.375) 0:00:06.078 ************ 2025-05-30 00:55:01.000819 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-05-30 00:55:01.000830 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-05-30 00:55:01.000841 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-05-30 00:55:01.000852 | orchestrator | 2025-05-30 00:55:01.000883 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-30 00:55:01.000894 | orchestrator | Friday 30 May 2025 00:47:45 +0000 (0:00:00.972) 0:00:07.051 ************ 2025-05-30 00:55:01.000905 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-05-30 00:55:01.000916 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-05-30 00:55:01.000926 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-05-30 00:55:01.000937 | orchestrator | 2025-05-30 00:55:01.000948 | orchestrator | TASK [module-load : Drop module persistence] 
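The sysctl and module-load output above boils down to three kernel settings plus loading and persisting ip_vs. A rough stand-alone equivalent of those steps is sketched below; the real run uses the Ansible sysctl and modprobe modules, and the values are the ones shown in the log:

```python
"""Illustrative only: apply the sysctl values and load/persist ip_vs by hand."""
import subprocess
from pathlib import Path

SYSCTL_VALUES = {
    # Allows keepalived/haproxy to bind the VIP even while it is not local yet.
    "net.ipv6.ip_nonlocal_bind": "1",
    "net.ipv4.ip_nonlocal_bind": "1",
    "net.unix.max_dgram_qlen": "128",
    # net.ipv4.tcp_retries2 is left untouched (KOLLA_UNSET in the log).
}


def apply_sysctl() -> None:
    for key, value in SYSCTL_VALUES.items():
        subprocess.run(["sysctl", "-w", f"{key}={value}"], check=True)


def load_and_persist_module(name: str = "ip_vs") -> None:
    subprocess.run(["modprobe", name], check=True)
    # A file in modules-load.d reloads the module on every boot.
    Path(f"/etc/modules-load.d/{name}.conf").write_text(f"{name}\n")


if __name__ == "__main__":
    apply_sysctl()
    load_and_persist_module()
```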
*********************************** 2025-05-30 00:55:01.000958 | orchestrator | Friday 30 May 2025 00:47:47 +0000 (0:00:01.730) 0:00:08.782 ************ 2025-05-30 00:55:01.000969 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-05-30 00:55:01.000980 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.001042 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-05-30 00:55:01.001057 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.001068 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-05-30 00:55:01.001079 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.001090 | orchestrator | 2025-05-30 00:55:01.001100 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-05-30 00:55:01.001122 | orchestrator | Friday 30 May 2025 00:47:48 +0000 (0:00:00.844) 0:00:09.626 ************ 2025-05-30 00:55:01.001136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-30 00:55:01.001181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-30 00:55:01.001195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-30 00:55:01.001207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-30 00:55:01.001219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-30 00:55:01.001283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-30 00:55:01.001305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-30 00:55:01.001346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3783bbad334270ba332e9b7d2cb580ffd4fa80a9', '__omit_place_holder__3783bbad334270ba332e9b7d2cb580ffd4fa80a9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-30 00:55:01.001368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-30 00:55:01.001390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3783bbad334270ba332e9b7d2cb580ffd4fa80a9', '__omit_place_holder__3783bbad334270ba332e9b7d2cb580ffd4fa80a9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-30 00:55:01.001410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-30 00:55:01.001431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3783bbad334270ba332e9b7d2cb580ffd4fa80a9', '__omit_place_holder__3783bbad334270ba332e9b7d2cb580ffd4fa80a9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-30 00:55:01.001452 | orchestrator | 2025-05-30 00:55:01.001472 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-05-30 00:55:01.001504 | orchestrator | Friday 30 May 2025 00:47:50 +0000 (0:00:02.000) 0:00:11.627 ************ 2025-05-30 00:55:01.001525 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.001545 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.001565 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.001585 | orchestrator | 2025-05-30 00:55:01.001607 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-05-30 00:55:01.001675 | orchestrator | Friday 30 May 2025 00:47:51 +0000 (0:00:01.619) 0:00:13.246 ************ 2025-05-30 00:55:01.001687 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-05-30 00:55:01.001698 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-05-30 00:55:01.001709 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-05-30 00:55:01.001720 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-05-30 00:55:01.001731 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-05-30 00:55:01.001742 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-05-30 00:55:01.001753 | orchestrator | 2025-05-30 00:55:01.001764 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-05-30 00:55:01.001775 | orchestrator | Friday 30 May 2025 00:47:55 +0000 (0:00:03.644) 0:00:16.891 ************ 2025-05-30 00:55:01.001785 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.001796 | orchestrator | changed: 
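The container definitions above attach healthchecks of two kinds: healthcheck_curl against the haproxy monitor URL and healthcheck_listen for the proxysql admin port. The kolla healthcheck scripts additionally apply the retries, interval and timeout values from the healthcheck dict; the sketch below only approximates the core probes:

```python
import socket
import urllib.request


def check_http(url: str, timeout: float = 5.0) -> bool:
    # healthcheck_curl-style probe: a successful HTTP response means "healthy".
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except OSError:  # URLError/HTTPError are OSError subclasses
        return False


def check_listen(host: str, port: int, timeout: float = 5.0) -> bool:
    # Approximation of a healthcheck_listen-style probe: is something accepting
    # TCP connections on the port?
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    print(check_http("http://192.168.16.10:61313"))  # haproxy monitor endpoint from the log
    print(check_listen("127.0.0.1", 6032))           # proxysql admin port from the log
```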
[testbed-node-2] 2025-05-30 00:55:01.001807 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.001818 | orchestrator | 2025-05-30 00:55:01.001829 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-05-30 00:55:01.001840 | orchestrator | Friday 30 May 2025 00:47:58 +0000 (0:00:03.346) 0:00:20.238 ************ 2025-05-30 00:55:01.001851 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:55:01.001898 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:55:01.001919 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:55:01.001938 | orchestrator | 2025-05-30 00:55:01.001956 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-05-30 00:55:01.001981 | orchestrator | Friday 30 May 2025 00:48:02 +0000 (0:00:03.196) 0:00:23.434 ************ 2025-05-30 00:55:01.001993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-30 00:55:01.002006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-30 00:55:01.002143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-30 00:55:01.002172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
proxysql 6032'], 'timeout': '30'}}})  2025-05-30 00:55:01.002194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-30 00:55:01.002207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-30 00:55:01.002283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-30 00:55:01.002299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-30 00:55:01.002310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-30 00:55:01.002322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__3783bbad334270ba332e9b7d2cb580ffd4fa80a9', '__omit_place_holder__3783bbad334270ba332e9b7d2cb580ffd4fa80a9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-30 00:55:01.002340 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.002352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3783bbad334270ba332e9b7d2cb580ffd4fa80a9', '__omit_place_holder__3783bbad334270ba332e9b7d2cb580ffd4fa80a9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-30 00:55:01.002364 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.002383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3783bbad334270ba332e9b7d2cb580ffd4fa80a9', '__omit_place_holder__3783bbad334270ba332e9b7d2cb580ffd4fa80a9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-30 00:55:01.002395 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.002406 | orchestrator | 2025-05-30 00:55:01.002417 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-05-30 00:55:01.002428 | orchestrator | Friday 30 May 2025 00:48:04 +0000 (0:00:02.015) 0:00:25.450 ************ 2025-05-30 00:55:01.002445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-30 00:55:01.002457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-30 00:55:01.002468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-30 00:55:01.002500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-30 00:55:01.002518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-30 00:55:01.002530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-30 00:55:01.002542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-30 00:55:01.002558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': 
['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3783bbad334270ba332e9b7d2cb580ffd4fa80a9', '__omit_place_holder__3783bbad334270ba332e9b7d2cb580ffd4fa80a9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-30 00:55:01.002570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-30 00:55:01.002607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3783bbad334270ba332e9b7d2cb580ffd4fa80a9', '__omit_place_holder__3783bbad334270ba332e9b7d2cb580ffd4fa80a9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-30 00:55:01.002620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-30 00:55:01.002667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3783bbad334270ba332e9b7d2cb580ffd4fa80a9', '__omit_place_holder__3783bbad334270ba332e9b7d2cb580ffd4fa80a9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-30 00:55:01.002681 | orchestrator | 2025-05-30 00:55:01.002693 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-05-30 00:55:01.002704 | orchestrator | Friday 30 May 2025 00:48:09 +0000 (0:00:05.215) 0:00:30.665 ************ 2025-05-30 00:55:01.002715 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-30 00:55:01.002745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-30 00:55:01.002757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-30 00:55:01.002776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-30 00:55:01.002787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-30 00:55:01.002815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-30 00:55:01.002826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-30 00:55:01.002843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3783bbad334270ba332e9b7d2cb580ffd4fa80a9', '__omit_place_holder__3783bbad334270ba332e9b7d2cb580ffd4fa80a9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-30 00:55:01.002854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-30 00:55:01.002995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3783bbad334270ba332e9b7d2cb580ffd4fa80a9', '__omit_place_holder__3783bbad334270ba332e9b7d2cb580ffd4fa80a9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-30 00:55:01.003008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-30 00:55:01.003020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3783bbad334270ba332e9b7d2cb580ffd4fa80a9', '__omit_place_holder__3783bbad334270ba332e9b7d2cb580ffd4fa80a9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-30 00:55:01.003031 | orchestrator | 2025-05-30 00:55:01.003042 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-05-30 00:55:01.003053 | orchestrator | Friday 30 May 2025 00:48:12 +0000 (0:00:03.103) 0:00:33.768 ************ 2025-05-30 00:55:01.003072 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-30 00:55:01.003084 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-30 00:55:01.003095 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-30 00:55:01.003105 | orchestrator | 2025-05-30 00:55:01.003133 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-05-30 00:55:01.003144 | orchestrator | Friday 30 May 2025 00:48:14 +0000 (0:00:01.991) 0:00:35.759 ************ 2025-05-30 00:55:01.003155 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-30 00:55:01.003166 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-30 00:55:01.003177 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-30 00:55:01.003188 | orchestrator | 2025-05-30 00:55:01.003199 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-05-30 00:55:01.003210 | orchestrator | Friday 30 May 2025 00:48:17 +0000 (0:00:03.112) 0:00:38.871 ************ 2025-05-30 00:55:01.003221 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.003232 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.003243 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.003254 | orchestrator | 2025-05-30 00:55:01.003271 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-05-30 00:55:01.003282 | orchestrator | Friday 30 May 2025 00:48:19 +0000 (0:00:02.350) 0:00:41.222 ************ 2025-05-30 00:55:01.003315 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-30 00:55:01.003328 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-30 00:55:01.003338 | orchestrator | changed: [testbed-node-2] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-30 00:55:01.003349 | orchestrator | 2025-05-30 00:55:01.003361 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-05-30 00:55:01.003372 | orchestrator | Friday 30 May 2025 00:48:22 +0000 (0:00:02.846) 0:00:44.069 ************ 2025-05-30 00:55:01.003382 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-30 00:55:01.003393 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-30 00:55:01.003404 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-30 00:55:01.003415 | orchestrator | 2025-05-30 00:55:01.003426 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-05-30 00:55:01.003436 | orchestrator | Friday 30 May 2025 00:48:24 +0000 (0:00:02.280) 0:00:46.350 ************ 2025-05-30 00:55:01.003446 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-05-30 00:55:01.003456 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-05-30 00:55:01.003465 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-05-30 00:55:01.003475 | orchestrator | 2025-05-30 00:55:01.003485 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-05-30 00:55:01.003554 | orchestrator | Friday 30 May 2025 00:48:27 +0000 (0:00:02.554) 0:00:48.905 ************ 2025-05-30 00:55:01.003564 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-05-30 00:55:01.003574 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-05-30 00:55:01.003583 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-05-30 00:55:01.003593 | orchestrator | 2025-05-30 00:55:01.003603 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-05-30 00:55:01.003644 | orchestrator | Friday 30 May 2025 00:48:30 +0000 (0:00:03.111) 0:00:52.017 ************ 2025-05-30 00:55:01.003655 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:55:01.003665 | orchestrator | 2025-05-30 00:55:01.003675 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-05-30 00:55:01.003685 | orchestrator | Friday 30 May 2025 00:48:31 +0000 (0:00:00.809) 0:00:52.827 ************ 2025-05-30 00:55:01.003695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-30 00:55:01.003713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-30 00:55:01.003746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-30 00:55:01.003758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-30 00:55:01.003768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-30 00:55:01.003778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-30 00:55:01.003789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-30 00:55:01.003805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-30 00:55:01.003816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-30 00:55:01.003832 | orchestrator | 2025-05-30 00:55:01.003842 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-05-30 00:55:01.003853 | orchestrator | Friday 30 May 2025 00:48:34 +0000 (0:00:03.491) 0:00:56.318 ************ 2025-05-30 00:55:01.003918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-30 00:55:01.003941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-30 00:55:01.003958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-30 00:55:01.003972 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.003986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-30 00:55:01.004003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-30 00:55:01.004043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-30 00:55:01.004062 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.004079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-30 00:55:01.004123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-30 00:55:01.004136 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-30 00:55:01.004146 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.004156 | orchestrator | 2025-05-30 00:55:01.004166 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-05-30 00:55:01.004175 | orchestrator | Friday 30 May 2025 00:48:35 +0000 (0:00:00.978) 0:00:57.297 ************ 2025-05-30 00:55:01.004185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-30 00:55:01.004196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-30 00:55:01.004219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-30 00:55:01.004229 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.004239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': 
'30'}}})  2025-05-30 00:55:01.004254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-30 00:55:01.004264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-30 00:55:01.004274 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.004284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-30 00:55:01.004294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-30 00:55:01.004314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-30 00:55:01.004324 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.004334 | orchestrator | 2025-05-30 00:55:01.004344 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-05-30 00:55:01.004358 | orchestrator | Friday 30 May 
2025 00:48:37 +0000 (0:00:01.193) 0:00:58.490 ************ 2025-05-30 00:55:01.004368 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-30 00:55:01.004378 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-30 00:55:01.004388 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-30 00:55:01.004397 | orchestrator | 2025-05-30 00:55:01.004407 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-05-30 00:55:01.004417 | orchestrator | Friday 30 May 2025 00:48:39 +0000 (0:00:02.162) 0:01:00.653 ************ 2025-05-30 00:55:01.004426 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-30 00:55:01.004436 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-30 00:55:01.004445 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-30 00:55:01.004455 | orchestrator | 2025-05-30 00:55:01.004464 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-05-30 00:55:01.004474 | orchestrator | Friday 30 May 2025 00:48:41 +0000 (0:00:01.969) 0:01:02.623 ************ 2025-05-30 00:55:01.004499 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-30 00:55:01.004510 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-30 00:55:01.004520 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-30 00:55:01.004529 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-30 00:55:01.004539 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.004552 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-30 00:55:01.004569 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.004587 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-30 00:55:01.004604 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.004618 | orchestrator | 2025-05-30 00:55:01.004628 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-05-30 00:55:01.004638 | orchestrator | Friday 30 May 2025 00:48:44 +0000 (0:00:02.895) 0:01:05.518 ************ 2025-05-30 00:55:01.004648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-30 00:55:01.004666 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-30 00:55:01.004676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-30 00:55:01.004693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-30 00:55:01.004704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-30 00:55:01.004718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-30 00:55:01.004728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-30 00:55:01.004739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3783bbad334270ba332e9b7d2cb580ffd4fa80a9', '__omit_place_holder__3783bbad334270ba332e9b7d2cb580ffd4fa80a9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-30 00:55:01.004768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-30 00:55:01.004784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3783bbad334270ba332e9b7d2cb580ffd4fa80a9', '__omit_place_holder__3783bbad334270ba332e9b7d2cb580ffd4fa80a9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-30 00:55:01.004795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-30 00:55:01.004805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3783bbad334270ba332e9b7d2cb580ffd4fa80a9', '__omit_place_holder__3783bbad334270ba332e9b7d2cb580ffd4fa80a9'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-30 00:55:01.004815 | orchestrator | 2025-05-30 00:55:01.004825 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-05-30 00:55:01.004834 | orchestrator | Friday 30 May 2025 00:48:47 +0000 (0:00:03.203) 0:01:08.722 ************ 2025-05-30 00:55:01.004844 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:55:01.004854 | orchestrator | 2025-05-30 00:55:01.004880 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-05-30 00:55:01.004890 | orchestrator | Friday 30 May 2025 00:48:48 +0000 (0:00:00.774) 0:01:09.496 ************ 2025-05-30 00:55:01.004902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-30 00:55:01.004932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-30 00:55:01.004944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.004977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.004994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-30 00:55:01.005005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-30 00:55:01.005015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.005031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.005041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': 
'8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-30 00:55:01.005057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-30 00:55:01.005068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.005082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.005093 | orchestrator | 2025-05-30 00:55:01.005103 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-05-30 00:55:01.005113 | orchestrator | Friday 30 May 2025 00:48:52 +0000 (0:00:04.061) 0:01:13.558 ************ 2025-05-30 00:55:01.005140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-30 00:55:01.005151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-30 00:55:01.005161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.005177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.005187 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.005197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-30 00:55:01.005221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-30 00:55:01.005248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.005259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.005269 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.005279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-30 00:55:01.005295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-30 00:55:01.005305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.005364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.005384 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.005394 | orchestrator | 2025-05-30 00:55:01.005404 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-05-30 00:55:01.005414 | orchestrator | Friday 30 May 2025 00:48:53 +0000 (0:00:00.915) 0:01:14.473 ************ 2025-05-30 00:55:01.005424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-30 00:55:01.005435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-30 00:55:01.005445 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.005455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-30 00:55:01.005464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-30 00:55:01.005474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-30 00:55:01.005484 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.005554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-30 00:55:01.005566 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.005576 | orchestrator | 2025-05-30 00:55:01.005585 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-05-30 00:55:01.005595 | orchestrator | Friday 30 May 2025 00:48:54 +0000 (0:00:01.194) 0:01:15.668 ************ 2025-05-30 00:55:01.005605 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.005615 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.005624 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.005634 | orchestrator | 2025-05-30 00:55:01.005643 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-05-30 00:55:01.005653 | orchestrator | Friday 30 May 2025 00:48:55 +0000 (0:00:01.335) 0:01:17.004 ************ 2025-05-30 00:55:01.005662 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.005672 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.005682 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.005691 | orchestrator | 2025-05-30 00:55:01.005701 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-05-30 00:55:01.005710 | orchestrator | Friday 30 May 2025 00:48:57 +0000 (0:00:02.135) 0:01:19.139 ************ 2025-05-30 00:55:01.005720 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:55:01.005729 | orchestrator | 2025-05-30 00:55:01.005739 | 
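The aodh items above show the selection rule this play applies to every service: the haproxy-config role only renders load-balancer configuration for entries whose value carries a 'haproxy' mapping (here aodh-api), while worker-style containers without one (aodh-evaluator, aodh-listener, aodh-notifier) are reported as skipping, and the firewall task is skipped on all three nodes because its condition is not met in this testbed. Below is a minimal Python sketch of that selection, reusing the dict shape printed by the play; it is an illustration only, not the Jinja2 template that kolla-ansible actually evaluates, and services_needing_haproxy is a hypothetical helper.

# Illustrative sketch only: mimics the selection visible in the log above,
# not the actual kolla-ansible haproxy-config role (which is a Jinja2 template).
from typing import Any

# Shape copied from the aodh items printed by the play.
aodh_services: dict[str, dict[str, Any]] = {
    "aodh-api": {
        "container_name": "aodh_api",
        "enabled": True,
        "haproxy": {
            "aodh_api": {"enabled": "yes", "mode": "http", "external": False,
                         "port": "8042", "listen_port": "8042"},
            "aodh_api_external": {"enabled": "yes", "mode": "http", "external": True,
                                  "external_fqdn": "api.testbed.osism.xyz",
                                  "port": "8042", "listen_port": "8042"},
        },
    },
    # Worker-style containers carry no 'haproxy' key and are skipped by the task.
    "aodh-evaluator": {"container_name": "aodh_evaluator", "enabled": True},
    "aodh-listener": {"container_name": "aodh_listener", "enabled": True},
    "aodh-notifier": {"container_name": "aodh_notifier", "enabled": True},
}

def services_needing_haproxy(services: dict[str, dict[str, Any]]) -> dict[str, dict[str, Any]]:
    """Return only enabled services that declare a 'haproxy' mapping."""
    return {name: svc for name, svc in services.items()
            if svc.get("enabled") and svc.get("haproxy")}

if __name__ == "__main__":
    for name, svc in services_needing_haproxy(aodh_services).items():
        for listener, opts in svc["haproxy"].items():
            side = "external" if opts.get("external") else "internal"
            print(f"{name}: {listener} ({side}) -> listen_port {opts['listen_port']}")

Run against the data above, this prints one internal and one external listener for aodh-api on port 8042, matching the two haproxy entries reported as changed in the log.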
orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-05-30 00:55:01.005749 | orchestrator | Friday 30 May 2025 00:48:58 +0000 (0:00:00.895) 0:01:20.034 ************ 2025-05-30 00:55:01.005767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-30 00:55:01.005815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.005827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.005838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-30 00:55:01.005849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.005914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.005976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-30 00:55:01.005990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.006000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 
'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.006100 | orchestrator | 2025-05-30 00:55:01.006116 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-05-30 00:55:01.006126 | orchestrator | Friday 30 May 2025 00:49:03 +0000 (0:00:05.138) 0:01:25.173 ************ 2025-05-30 00:55:01.006136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-30 00:55:01.006157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-30 00:55:01.006236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.006247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.006255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.006263 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.006272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.006280 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.006956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-30 00:55:01.006998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.007012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.007021 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.007029 | orchestrator | 2025-05-30 00:55:01.007037 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-05-30 00:55:01.007046 | orchestrator | Friday 30 May 2025 00:49:04 +0000 (0:00:00.684) 0:01:25.858 ************ 2025-05-30 00:55:01.007054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-30 00:55:01.007063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-30 00:55:01.007071 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.007079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-30 00:55:01.007087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-30 00:55:01.007095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-30 00:55:01.007104 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.007112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-30 00:55:01.007120 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.007128 | orchestrator | 2025-05-30 00:55:01.007140 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-05-30 00:55:01.007149 | orchestrator | Friday 30 May 2025 00:49:05 +0000 (0:00:01.092) 0:01:26.950 ************ 2025-05-30 00:55:01.007156 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.007164 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.007172 | orchestrator | changed: [testbed-node-2] 
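Every API service in this play carries a paired internal and external haproxy entry, the external one adding external_fqdn 'api.testbed.osism.xyz' and both pointing at the same backend port (9311 for barbican, 8042 for aodh). As a rough sketch of what one such pair expands to, the snippet below builds HAProxy-style listen blocks from the barbican entries printed above; the two VIP addresses and the health-check parameters are assumed placeholders (only the per-node backend addresses 192.168.16.10-12 and the external FQDN appear in the log), and the real stanzas are rendered by the kolla-ansible haproxy-config templates, not by this code.

# Sketch only: the VIPs below are assumptions, not values taken from this log.
INTERNAL_VIP = "192.168.16.254"   # assumed internal VIP
EXTERNAL_VIP = "203.0.113.10"     # assumed public VIP behind api.testbed.osism.xyz

BACKENDS = {  # backend IPs as printed in the healthcheck_curl tests above
    "testbed-node-0": "192.168.16.10",
    "testbed-node-1": "192.168.16.11",
    "testbed-node-2": "192.168.16.12",
}

def listen_block(name: str, opts: dict) -> str:
    """Render one HAProxy-style listen section for an internal or external entry."""
    bind_ip = EXTERNAL_VIP if opts.get("external") else INTERNAL_VIP
    lines = [f"listen {name}",
             f"    mode {opts['mode']}",
             f"    bind {bind_ip}:{opts['listen_port']}"]
    for host, ip in BACKENDS.items():
        # check parameters assumed to match the member lines used elsewhere in this play
        lines.append(f"    server {host} {ip}:{opts['port']} check inter 2000 rise 2 fall 5")
    return "\n".join(lines)

barbican_haproxy = {  # copied from the barbican-api item above
    "barbican_api": {"enabled": "yes", "mode": "http", "external": False,
                     "port": "9311", "listen_port": "9311", "tls_backend": "no"},
    "barbican_api_external": {"enabled": "yes", "mode": "http", "external": True,
                              "external_fqdn": "api.testbed.osism.xyz",
                              "port": "9311", "listen_port": "9311", "tls_backend": "no"},
}

if __name__ == "__main__":
    for name, opts in barbican_haproxy.items():
        print(listen_block(name, opts), end="\n\n")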
2025-05-30 00:55:01.007180 | orchestrator | 2025-05-30 00:55:01.007216 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-05-30 00:55:01.007226 | orchestrator | Friday 30 May 2025 00:49:06 +0000 (0:00:01.264) 0:01:28.215 ************ 2025-05-30 00:55:01.007233 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.007242 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.007249 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.007257 | orchestrator | 2025-05-30 00:55:01.007265 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-05-30 00:55:01.007273 | orchestrator | Friday 30 May 2025 00:49:09 +0000 (0:00:02.174) 0:01:30.389 ************ 2025-05-30 00:55:01.007281 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.007289 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.007297 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.007304 | orchestrator | 2025-05-30 00:55:01.007318 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-05-30 00:55:01.007327 | orchestrator | Friday 30 May 2025 00:49:09 +0000 (0:00:00.432) 0:01:30.822 ************ 2025-05-30 00:55:01.007335 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:55:01.007343 | orchestrator | 2025-05-30 00:55:01.007351 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-05-30 00:55:01.007358 | orchestrator | Friday 30 May 2025 00:49:11 +0000 (0:00:02.114) 0:01:32.936 ************ 2025-05-30 00:55:01.007371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-30 00:55:01.007381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-30 00:55:01.007389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': 
True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-30 00:55:01.007403 | orchestrator | 2025-05-30 00:55:01.007411 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-05-30 00:55:01.007419 | orchestrator | Friday 30 May 2025 00:49:14 +0000 (0:00:03.292) 0:01:36.228 ************ 2025-05-30 00:55:01.007427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-30 00:55:01.007435 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.007448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-30 00:55:01.007456 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.007468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': 
['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-30 00:55:01.007476 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.007484 | orchestrator | 2025-05-30 00:55:01.007493 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-05-30 00:55:01.007502 | orchestrator | Friday 30 May 2025 00:49:16 +0000 (0:00:01.326) 0:01:37.554 ************ 2025-05-30 00:55:01.007513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-30 00:55:01.007538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-30 00:55:01.007549 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.007591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-30 00:55:01.007602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-30 00:55:01.007612 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.007622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-30 00:55:01.007637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-30 00:55:01.007647 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.007656 | 
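ceph-rgw is handled differently from the services above: its item carries no container_name on the control nodes but supplies a custom_member_list, so the radosgw frontends on port 6780 forward to the explicitly listed Ceph nodes (testbed-node-3 to testbed-node-5 on port 8081) rather than to members generated from an inventory group. The small sketch below illustrates only that branching pattern as it appears in the log; build_backend is a hypothetical helper, not part of kolla-ansible. The ProxySQL tasks that follow are skipped for ceph-rgw, which, unlike aodh, barbican, and cinder, has no MariaDB schema to route here.

# Illustration of the custom_member_list branch seen in the ceph-rgw items above.
def build_backend(name: str, opts: dict, group_hosts: dict[str, str]) -> str:
    lines = [f"backend {name}_back", f"    mode {opts['mode']}"]
    members = opts.get("custom_member_list")
    if members:
        # ceph-rgw case: server lines are supplied verbatim (Ceph nodes on port 8081).
        lines += [f"    {m}" for m in members]
    else:
        # default case: one server per host in the service's inventory group.
        lines += [f"    server {h} {ip}:{opts['port']} check inter 2000 rise 2 fall 5"
                  for h, ip in group_hosts.items()]
    return "\n".join(lines)

radosgw = {  # copied from the radosgw entry above
    "enabled": True, "mode": "http", "external": False, "port": "6780",
    "custom_member_list": [
        "server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5",
        "server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5",
        "server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5",
    ],
}

print(build_backend("radosgw", radosgw, group_hosts={}))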
orchestrator | 2025-05-30 00:55:01.007665 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-05-30 00:55:01.007675 | orchestrator | Friday 30 May 2025 00:49:18 +0000 (0:00:02.397) 0:01:39.952 ************ 2025-05-30 00:55:01.007685 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.007694 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.007703 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.007713 | orchestrator | 2025-05-30 00:55:01.007722 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-05-30 00:55:01.007731 | orchestrator | Friday 30 May 2025 00:49:19 +0000 (0:00:00.742) 0:01:40.695 ************ 2025-05-30 00:55:01.007741 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.007750 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.007759 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.007769 | orchestrator | 2025-05-30 00:55:01.007821 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-05-30 00:55:01.007831 | orchestrator | Friday 30 May 2025 00:49:20 +0000 (0:00:01.177) 0:01:41.873 ************ 2025-05-30 00:55:01.007840 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:55:01.007848 | orchestrator | 2025-05-30 00:55:01.007876 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-05-30 00:55:01.007884 | orchestrator | Friday 30 May 2025 00:49:21 +0000 (0:00:00.945) 0:01:42.818 ************ 2025-05-30 00:55:01.007893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-30 00:55:01.007919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.007928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.007942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.007955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-30 00:55:01.007964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-30 00:55:01.007986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.007996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.008004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.008018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.008038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.008065 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.008073 | orchestrator | 2025-05-30 00:55:01.008082 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-05-30 00:55:01.008090 | orchestrator | Friday 30 May 2025 00:49:25 +0000 (0:00:04.074) 0:01:46.892 ************ 2025-05-30 00:55:01.008099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-30 00:55:01.008107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.008121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.008155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 
'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.008164 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.008172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-30 00:55:01.008181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.008189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.008234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.008243 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.008269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-30 00:55:01.008279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.008287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.008296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.008304 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.008312 | orchestrator | 2025-05-30 00:55:01.008320 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-05-30 00:55:01.008328 | orchestrator | Friday 30 May 2025 00:49:26 +0000 (0:00:01.008) 0:01:47.900 ************ 2025-05-30 00:55:01.008336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-30 00:55:01.008350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-30 00:55:01.008359 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.008372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-30 00:55:01.008418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-30 00:55:01.008427 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.008435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-30 00:55:01.008456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-30 00:55:01.008465 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.008473 | orchestrator | 2025-05-30 00:55:01.008481 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-05-30 00:55:01.008489 | orchestrator | Friday 30 May 2025 00:49:27 +0000 (0:00:01.055) 0:01:48.955 ************ 2025-05-30 00:55:01.008497 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.008505 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.008512 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.008520 | orchestrator | 2025-05-30 00:55:01.008528 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-05-30 00:55:01.008536 | orchestrator | Friday 30 May 2025 00:49:29 +0000 (0:00:01.497) 0:01:50.453 ************ 2025-05-30 00:55:01.008544 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.008552 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.008560 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.008567 | orchestrator | 2025-05-30 00:55:01.008575 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-05-30 00:55:01.008583 | orchestrator | Friday 30 May 2025 00:49:31 +0000 (0:00:02.272) 0:01:52.726 ************ 2025-05-30 00:55:01.008591 | orchestrator | 
skipping: [testbed-node-0] 2025-05-30 00:55:01.008599 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.008607 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.008615 | orchestrator | 2025-05-30 00:55:01.008622 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-05-30 00:55:01.008630 | orchestrator | Friday 30 May 2025 00:49:31 +0000 (0:00:00.435) 0:01:53.161 ************ 2025-05-30 00:55:01.008638 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.008646 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.008654 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.008662 | orchestrator | 2025-05-30 00:55:01.008670 | orchestrator | TASK [include_role : designate] ************************************************ 2025-05-30 00:55:01.008678 | orchestrator | Friday 30 May 2025 00:49:32 +0000 (0:00:00.687) 0:01:53.849 ************ 2025-05-30 00:55:01.008686 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:55:01.008693 | orchestrator | 2025-05-30 00:55:01.008720 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-05-30 00:55:01.008728 | orchestrator | Friday 30 May 2025 00:49:33 +0000 (0:00:01.015) 0:01:54.864 ************ 2025-05-30 00:55:01.008737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-30 00:55:01.008756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-30 00:55:01.008766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.008778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.008787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.008795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.008804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.008827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-30 00:55:01.008841 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-30 00:55:01.008939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-30 00:55:01.008954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-30 00:55:01.008963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.008971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  
2025-05-30 00:55:01.009038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.009055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.009064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.009072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.009081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.009089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.009119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.009127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.009135 | orchestrator | 2025-05-30 00:55:01.009145 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-05-30 00:55:01.009152 | orchestrator | Friday 30 May 2025 00:49:38 +0000 (0:00:05.295) 0:02:00.160 ************ 2025-05-30 00:55:01.009159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-30 00:55:01.009176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-30 00:55:01.009185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-30 00:55:01.009199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-30 00:55:01.009206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.009217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.009225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.009242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.009250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.009257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-30 00:55:01.009275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.009286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.009293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-30 00:55:01.009310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.009318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.009325 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.009332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.009350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.009358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.009369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.009916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.009940 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.009953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.009961 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.009968 | orchestrator | 2025-05-30 00:55:01.009975 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-05-30 00:55:01.009982 | orchestrator | Friday 30 May 2025 00:49:39 +0000 (0:00:00.896) 0:02:01.056 ************ 2025-05-30 00:55:01.010012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-30 00:55:01.010091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-30 00:55:01.010099 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.010105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-30 00:55:01.010112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}})  2025-05-30 00:55:01.010119 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.010126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-30 00:55:01.010132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-30 00:55:01.010139 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.010180 | orchestrator | 2025-05-30 00:55:01.010189 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-05-30 00:55:01.010196 | orchestrator | Friday 30 May 2025 00:49:40 +0000 (0:00:01.280) 0:02:02.337 ************ 2025-05-30 00:55:01.010203 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.010209 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.010217 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.010229 | orchestrator | 2025-05-30 00:55:01.010240 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-05-30 00:55:01.010252 | orchestrator | Friday 30 May 2025 00:49:42 +0000 (0:00:01.231) 0:02:03.569 ************ 2025-05-30 00:55:01.010263 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.010275 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.010286 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.010298 | orchestrator | 2025-05-30 00:55:01.010309 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-05-30 00:55:01.010321 | orchestrator | Friday 30 May 2025 00:49:44 +0000 (0:00:02.386) 0:02:05.955 ************ 2025-05-30 00:55:01.010332 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.010344 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.010355 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.010366 | orchestrator | 2025-05-30 00:55:01.010378 | orchestrator | TASK [include_role : glance] *************************************************** 2025-05-30 00:55:01.010390 | orchestrator | Friday 30 May 2025 00:49:45 +0000 (0:00:00.440) 0:02:06.396 ************ 2025-05-30 00:55:01.010401 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:55:01.010411 | orchestrator | 2025-05-30 00:55:01.010417 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-05-30 00:55:01.010424 | orchestrator | Friday 30 May 2025 00:49:46 +0000 (0:00:01.043) 0:02:07.439 ************ 2025-05-30 00:55:01.010459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-30 00:55:01.010490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-30 00:55:01.010518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-30 00:55:01.010560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-30 00:55:01.010577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-30 00:55:01.010607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-30 00:55:01.010623 | orchestrator | 2025-05-30 00:55:01.010635 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-05-30 00:55:01.010648 | orchestrator | Friday 30 May 2025 00:49:50 +0000 (0:00:04.912) 0:02:12.352 
************ 2025-05-30 00:55:01.010661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-30 00:55:01.010679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-30 00:55:01.010688 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.010740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-30 00:55:01.010774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-30 00:55:01.010794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-30 00:55:01.010806 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.010831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-30 00:55:01.010874 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.010882 | orchestrator | 2025-05-30 00:55:01.010889 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-05-30 00:55:01.010896 | orchestrator | Friday 30 May 2025 00:49:55 +0000 (0:00:04.064) 0:02:16.416 ************ 2025-05-30 00:55:01.010904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-30 00:55:01.010911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-30 00:55:01.010918 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.010926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-30 00:55:01.010936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-30 00:55:01.010948 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.010967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-30 00:55:01.010988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-30 00:55:01.011000 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.011012 | orchestrator | 2025-05-30 00:55:01.011023 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-05-30 00:55:01.011035 | orchestrator | Friday 30 May 2025 00:50:00 +0000 (0:00:05.538) 0:02:21.954 ************ 2025-05-30 00:55:01.011047 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.011059 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.011070 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.011082 | orchestrator | 2025-05-30 00:55:01.011093 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-05-30 00:55:01.011103 | orchestrator | Friday 30 May 2025 00:50:01 +0000 (0:00:01.131) 0:02:23.086 ************ 2025-05-30 00:55:01.011109 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.011116 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.011123 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.011129 | orchestrator | 2025-05-30 00:55:01.011136 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-05-30 00:55:01.011143 | orchestrator | Friday 30 May 2025 00:50:03 +0000 (0:00:02.004) 0:02:25.091 ************ 2025-05-30 00:55:01.011149 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.011156 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.011162 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.011194 | orchestrator | 2025-05-30 00:55:01.011203 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-05-30 00:55:01.011209 | orchestrator | Friday 30 May 2025 00:50:04 +0000 (0:00:00.484) 0:02:25.575 ************ 2025-05-30 00:55:01.011216 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:55:01.011223 | orchestrator | 2025-05-30 00:55:01.011230 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-05-30 00:55:01.011261 | orchestrator | Friday 30 May 2025 00:50:05 +0000 (0:00:01.145) 0:02:26.721 ************ 2025-05-30 00:55:01.011270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-30 00:55:01.011278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-30 00:55:01.011333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-30 00:55:01.011341 | orchestrator | 2025-05-30 00:55:01.011348 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-05-30 00:55:01.011354 | orchestrator | Friday 30 May 2025 00:50:10 +0000 (0:00:04.853) 0:02:31.574 ************ 2025-05-30 00:55:01.011382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-30 00:55:01.011390 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.011397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-30 00:55:01.011405 | orchestrator | skipping: 
[testbed-node-1] 2025-05-30 00:55:01.011412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-30 00:55:01.011419 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.011425 | orchestrator | 2025-05-30 00:55:01.011432 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-05-30 00:55:01.011439 | orchestrator | Friday 30 May 2025 00:50:10 +0000 (0:00:00.535) 0:02:32.110 ************ 2025-05-30 00:55:01.011446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-30 00:55:01.011462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-30 00:55:01.011474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-30 00:55:01.011487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-30 00:55:01.011499 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.011508 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.011515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-30 00:55:01.011522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-30 00:55:01.011529 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.011535 | orchestrator | 2025-05-30 00:55:01.011551 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-05-30 00:55:01.011558 | orchestrator | Friday 30 May 2025 00:50:11 +0000 (0:00:00.861) 0:02:32.971 ************ 2025-05-30 00:55:01.011565 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.011571 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.011578 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.011584 | orchestrator | 2025-05-30 00:55:01.011591 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-05-30 00:55:01.011598 | orchestrator | Friday 30 May 2025 00:50:12 +0000 (0:00:01.216) 0:02:34.188 ************ 2025-05-30 00:55:01.011604 | orchestrator | changed: 
[testbed-node-0] 2025-05-30 00:55:01.011611 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.011617 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.011624 | orchestrator | 2025-05-30 00:55:01.011635 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-05-30 00:55:01.011642 | orchestrator | Friday 30 May 2025 00:50:14 +0000 (0:00:02.057) 0:02:36.245 ************ 2025-05-30 00:55:01.011649 | orchestrator | included: heat for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:55:01.011655 | orchestrator | 2025-05-30 00:55:01.011662 | orchestrator | TASK [haproxy-config : Copying over heat haproxy config] *********************** 2025-05-30 00:55:01.011669 | orchestrator | Friday 30 May 2025 00:50:16 +0000 (0:00:01.290) 0:02:37.536 ************ 2025-05-30 00:55:01.011689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-30 00:55:01.011698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-30 00:55:01.011710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': 
'8004', 'tls_backend': 'no'}}}}) 2025-05-30 00:55:01.011717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-30 00:55:01.011729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.011760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-30 00:55:01.011782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-30 00:55:01.011790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.011797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.011804 | orchestrator | 2025-05-30 00:55:01.011811 | orchestrator | TASK [haproxy-config : Add configuration for heat when using single external frontend] *** 2025-05-30 00:55:01.011818 | orchestrator | Friday 30 May 2025 00:50:23 +0000 (0:00:07.328) 0:02:44.865 ************ 2025-05-30 00:55:01.011829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-30 00:55:01.011840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-30 00:55:01.011926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': 
['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.011937 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.011944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-30 00:55:01.011951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-30 00:55:01.011963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.011971 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.011984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': 
{'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-30 00:55:01.011996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-30 00:55:01.012003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.012010 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.012017 | orchestrator | 2025-05-30 00:55:01.012024 | orchestrator | TASK [haproxy-config : Configuring firewall for heat] ************************** 2025-05-30 00:55:01.012031 | orchestrator | Friday 30 May 2025 00:50:24 +0000 (0:00:00.690) 0:02:45.556 ************ 2025-05-30 00:55:01.012043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-30 00:55:01.012080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-30 00:55:01.012094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-30 00:55:01.012102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-30 00:55:01.012109 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.012116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-30 00:55:01.012123 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-30 00:55:01.012134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-30 00:55:01.012141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-30 00:55:01.012176 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.012189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-30 00:55:01.012201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-30 00:55:01.012213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-30 00:55:01.012223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-30 00:55:01.012235 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.012246 | orchestrator | 2025-05-30 00:55:01.012257 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL users config] *************** 2025-05-30 00:55:01.012269 | orchestrator | Friday 30 May 2025 00:50:25 +0000 (0:00:01.320) 0:02:46.876 ************ 2025-05-30 00:55:01.012281 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.012293 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.012305 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.012312 | orchestrator | 2025-05-30 00:55:01.012319 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL rules config] *************** 2025-05-30 00:55:01.012326 | orchestrator | Friday 30 May 2025 00:50:27 +0000 (0:00:01.636) 0:02:48.512 ************ 2025-05-30 00:55:01.012333 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.012339 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.012346 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.012352 | orchestrator | 2025-05-30 00:55:01.012359 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-05-30 00:55:01.012366 | orchestrator | Friday 30 May 2025 00:50:29 +0000 (0:00:02.613) 0:02:51.126 ************ 2025-05-30 00:55:01.012372 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:55:01.012379 | orchestrator | 2025-05-30 00:55:01.012386 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-05-30 00:55:01.012392 | orchestrator | Friday 30 May 2025 00:50:30 +0000 
(0:00:01.216) 0:02:52.342 ************ 2025-05-30 00:55:01.012407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-30 00:55:01.012425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-30 00:55:01.012440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-30 00:55:01.012451 | orchestrator | 2025-05-30 00:55:01.012458 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-05-30 00:55:01.012522 | orchestrator | Friday 30 May 2025 00:50:35 +0000 (0:00:04.889) 0:02:57.232 ************ 2025-05-30 00:55:01.012530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 
'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-30 00:55:01.012537 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.014185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-30 00:55:01.014285 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.014303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-30 00:55:01.014315 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.014331 | orchestrator | 2025-05-30 00:55:01.014340 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-05-30 00:55:01.014350 | orchestrator | Friday 30 May 2025 00:50:37 +0000 (0:00:01.158) 0:02:58.390 ************ 2025-05-30 00:55:01.014359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 
'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-30 00:55:01.014382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-30 00:55:01.014393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-30 00:55:01.014403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-30 00:55:01.014412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-30 00:55:01.014422 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.014431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-30 00:55:01.014440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-30 00:55:01.014449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-30 00:55:01.014458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-30 00:55:01.014467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-30 00:55:01.014475 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.014484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-30 00:55:01.014543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-30 00:55:01.014586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-30 00:55:01.014603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-30 00:55:01.014612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-30 00:55:01.014622 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.014631 | orchestrator | 2025-05-30 00:55:01.014648 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-05-30 00:55:01.014657 | orchestrator | Friday 30 May 2025 00:50:38 +0000 (0:00:01.334) 0:02:59.724 ************ 2025-05-30 00:55:01.014666 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.014674 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.014683 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.014692 | orchestrator | 2025-05-30 00:55:01.014700 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-05-30 00:55:01.014709 | orchestrator | Friday 30 May 2025 00:50:39 +0000 (0:00:01.399) 0:03:01.124 ************ 2025-05-30 00:55:01.014735 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.014744 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.014753 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.014762 | orchestrator | 2025-05-30 00:55:01.014771 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-05-30 00:55:01.014780 | orchestrator | Friday 30 May 2025 00:50:42 +0000 (0:00:02.561) 0:03:03.685 ************ 2025-05-30 00:55:01.014788 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.014797 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.014806 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.014814 | orchestrator | 2025-05-30 00:55:01.014823 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-05-30 00:55:01.014832 | orchestrator | Friday 30 May 2025 00:50:42 +0000 (0:00:00.464) 0:03:04.150 ************ 2025-05-30 00:55:01.014841 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.014849 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.014873 | orchestrator | skipping: 
[testbed-node-2] 2025-05-30 00:55:01.014882 | orchestrator | 2025-05-30 00:55:01.014891 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-05-30 00:55:01.014900 | orchestrator | Friday 30 May 2025 00:50:43 +0000 (0:00:00.294) 0:03:04.445 ************ 2025-05-30 00:55:01.014908 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:55:01.014917 | orchestrator | 2025-05-30 00:55:01.014926 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-05-30 00:55:01.014934 | orchestrator | Friday 30 May 2025 00:50:44 +0000 (0:00:01.364) 0:03:05.810 ************ 2025-05-30 00:55:01.014944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-30 00:55:01.014971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-30 00:55:01.014988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-30 00:55:01.015006 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-30 00:55:01.015016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-30 00:55:01.015025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-30 00:55:01.015040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-30 00:55:01.015050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-30 00:55:01.015064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-30 00:55:01.015073 | orchestrator | 2025-05-30 00:55:01.015082 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-05-30 00:55:01.015092 | orchestrator | Friday 30 May 2025 00:50:48 +0000 (0:00:04.211) 0:03:10.022 ************ 2025-05-30 00:55:01.015105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-30 00:55:01.015115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-30 00:55:01.015129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-30 00:55:01.015138 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.015147 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-30 00:55:01.015162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-30 00:55:01.015176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-30 00:55:01.015185 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.015194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-30 00:55:01.015209 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-30 00:55:01.015218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-30 00:55:01.015227 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.015236 | orchestrator | 2025-05-30 00:55:01.015245 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-05-30 00:55:01.015254 | orchestrator | Friday 30 May 2025 00:50:49 +0000 (0:00:00.786) 0:03:10.808 ************ 2025-05-30 00:55:01.015264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-30 00:55:01.015274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-30 00:55:01.015282 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.015296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-30 00:55:01.015305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-30 00:55:01.015318 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.015327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-30 00:55:01.015336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance "roundrobin"']}})  2025-05-30 00:55:01.015360 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.015369 | orchestrator | 2025-05-30 00:55:01.015378 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-05-30 00:55:01.015387 | orchestrator | Friday 30 May 2025 00:50:50 +0000 (0:00:01.303) 0:03:12.112 ************ 2025-05-30 00:55:01.015395 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.015404 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.015413 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.015421 | orchestrator | 2025-05-30 00:55:01.015430 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-05-30 00:55:01.015439 | orchestrator | Friday 30 May 2025 00:50:52 +0000 (0:00:01.437) 0:03:13.549 ************ 2025-05-30 00:55:01.015448 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.015456 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.015465 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.015473 | orchestrator | 2025-05-30 00:55:01.015482 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-05-30 00:55:01.015491 | orchestrator | Friday 30 May 2025 00:50:54 +0000 (0:00:02.537) 0:03:16.086 ************ 2025-05-30 00:55:01.015500 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.015508 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.015517 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.015526 | orchestrator | 2025-05-30 00:55:01.015535 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-05-30 00:55:01.015543 | orchestrator | Friday 30 May 2025 00:50:55 +0000 (0:00:00.340) 0:03:16.427 ************ 2025-05-30 00:55:01.015552 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:55:01.015561 | orchestrator | 2025-05-30 00:55:01.015570 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-05-30 00:55:01.015578 | orchestrator | Friday 30 May 2025 00:50:56 +0000 (0:00:01.288) 0:03:17.715 ************ 2025-05-30 00:55:01.015588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-30 00:55:01.015597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': 
'', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.015625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-30 00:55:01.015641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.015651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-30 00:55:01.015661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.015670 | orchestrator | 2025-05-30 00:55:01.015679 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-05-30 00:55:01.015687 | orchestrator | Friday 30 May 2025 00:51:00 +0000 (0:00:04.572) 0:03:22.287 ************ 2025-05-30 00:55:01.015702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-30 00:55:01.015720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.015729 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.015739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-30 00:55:01.015748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.015757 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.015766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-30 00:55:01.015780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.015803 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.015813 | orchestrator | 2025-05-30 00:55:01.015822 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-05-30 00:55:01.015830 | orchestrator | Friday 30 May 2025 00:51:01 +0000 (0:00:00.875) 0:03:23.163 ************ 2025-05-30 00:55:01.015851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-30 00:55:01.015872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-30 00:55:01.015881 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.015890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-30 
00:55:01.015899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-30 00:55:01.015908 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.015916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-30 00:55:01.015925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-30 00:55:01.015934 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.015943 | orchestrator | 2025-05-30 00:55:01.015952 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-05-30 00:55:01.015960 | orchestrator | Friday 30 May 2025 00:51:03 +0000 (0:00:01.699) 0:03:24.863 ************ 2025-05-30 00:55:01.015969 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.015978 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.015986 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.015995 | orchestrator | 2025-05-30 00:55:01.016004 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-05-30 00:55:01.016012 | orchestrator | Friday 30 May 2025 00:51:04 +0000 (0:00:01.339) 0:03:26.202 ************ 2025-05-30 00:55:01.016021 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.016030 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.016038 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.016047 | orchestrator | 2025-05-30 00:55:01.016056 | orchestrator | TASK [include_role : manila] *************************************************** 2025-05-30 00:55:01.016064 | orchestrator | Friday 30 May 2025 00:51:07 +0000 (0:00:02.387) 0:03:28.590 ************ 2025-05-30 00:55:01.016073 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:55:01.016082 | orchestrator | 2025-05-30 00:55:01.016090 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-05-30 00:55:01.016099 | orchestrator | Friday 30 May 2025 00:51:08 +0000 (0:00:01.142) 0:03:29.733 ************ 2025-05-30 00:55:01.016108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-30 00:55:01.016124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 
'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.016145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.016155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.016165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-30 00:55:01.016175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-30 00:55:01.016189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.016203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.016216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.016225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.016234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.016244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 
'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.016257 | orchestrator | 2025-05-30 00:55:01.016266 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-05-30 00:55:01.016275 | orchestrator | Friday 30 May 2025 00:51:12 +0000 (0:00:04.511) 0:03:34.244 ************ 2025-05-30 00:55:01.016285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-30 00:55:01.016298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.016321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.016331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.016340 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.016349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-30 00:55:01.016372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.016382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.016396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.016406 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.016418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-30 00:55:01.016428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.016437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.016460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.016470 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.016479 | orchestrator | 2025-05-30 00:55:01.016488 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-05-30 00:55:01.016497 | orchestrator | Friday 30 May 2025 00:51:13 +0000 (0:00:01.053) 0:03:35.298 ************ 2025-05-30 00:55:01.016506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-30 00:55:01.016515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-30 00:55:01.016524 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.016532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 
'listen_port': '8786'}})  2025-05-30 00:55:01.016541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-30 00:55:01.016550 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.016563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-30 00:55:01.016573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-30 00:55:01.016582 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.016590 | orchestrator | 2025-05-30 00:55:01.016612 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-05-30 00:55:01.016622 | orchestrator | Friday 30 May 2025 00:51:15 +0000 (0:00:01.257) 0:03:36.555 ************ 2025-05-30 00:55:01.016630 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.016639 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.016648 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.016657 | orchestrator | 2025-05-30 00:55:01.016665 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-05-30 00:55:01.016674 | orchestrator | Friday 30 May 2025 00:51:16 +0000 (0:00:01.453) 0:03:38.008 ************ 2025-05-30 00:55:01.016683 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.016691 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.016700 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.016709 | orchestrator | 2025-05-30 00:55:01.016717 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-05-30 00:55:01.016726 | orchestrator | Friday 30 May 2025 00:51:19 +0000 (0:00:02.564) 0:03:40.572 ************ 2025-05-30 00:55:01.016735 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:55:01.016768 | orchestrator | 2025-05-30 00:55:01.016785 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-05-30 00:55:01.016800 | orchestrator | Friday 30 May 2025 00:51:20 +0000 (0:00:01.566) 0:03:42.139 ************ 2025-05-30 00:55:01.016814 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-30 00:55:01.016831 | orchestrator | 2025-05-30 00:55:01.016847 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-05-30 00:55:01.016897 | orchestrator | Friday 30 May 2025 00:51:23 +0000 (0:00:03.119) 0:03:45.259 ************ 2025-05-30 00:55:01.016909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-30 00:55:01.016920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-30 00:55:01.016929 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.016952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-30 00:55:01.016983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-30 00:55:01.016993 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.017941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-30 00:55:01.017987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-30 00:55:01.018045 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.018058 | orchestrator | 2025-05-30 00:55:01.018068 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-05-30 00:55:01.018077 | orchestrator | Friday 30 May 2025 00:51:27 +0000 (0:00:03.217) 0:03:48.476 ************ 2025-05-30 00:55:01.018086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-30 00:55:01.018097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-30 00:55:01.018106 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.018130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-30 00:55:01.018147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-30 00:55:01.018156 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.018165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-30 00:55:01.018180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-30 00:55:01.018205 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.018215 | orchestrator | 2025-05-30 00:55:01.018228 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-05-30 00:55:01.018237 | orchestrator | Friday 30 May 2025 00:51:29 +0000 (0:00:02.581) 0:03:51.058 ************ 2025-05-30 00:55:01.018246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-30 00:55:01.018255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-30 00:55:01.018264 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.018273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-30 00:55:01.018283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': 
{'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-30 00:55:01.018292 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.018301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-30 00:55:01.018314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-30 00:55:01.018328 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.018337 | orchestrator | 2025-05-30 00:55:01.018346 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-05-30 00:55:01.018355 | orchestrator | Friday 30 May 2025 00:51:32 +0000 (0:00:02.877) 0:03:53.935 ************ 2025-05-30 00:55:01.018364 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.018379 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.018388 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.018397 | orchestrator | 2025-05-30 00:55:01.018405 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-05-30 00:55:01.018414 | orchestrator | Friday 30 May 2025 00:51:34 +0000 (0:00:01.888) 0:03:55.824 ************ 2025-05-30 00:55:01.018423 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.018431 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.018440 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.018449 | orchestrator | 2025-05-30 00:55:01.018458 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-05-30 00:55:01.018466 | orchestrator | Friday 30 May 2025 00:51:36 +0000 (0:00:01.601) 0:03:57.426 ************ 2025-05-30 00:55:01.018475 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.018484 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.018492 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.018501 | orchestrator | 2025-05-30 00:55:01.018510 | orchestrator | TASK [include_role : memcached] 
************************************************ 2025-05-30 00:55:01.018518 | orchestrator | Friday 30 May 2025 00:51:36 +0000 (0:00:00.520) 0:03:57.946 ************ 2025-05-30 00:55:01.018527 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:55:01.018536 | orchestrator | 2025-05-30 00:55:01.018545 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-05-30 00:55:01.018553 | orchestrator | Friday 30 May 2025 00:51:38 +0000 (0:00:01.455) 0:03:59.401 ************ 2025-05-30 00:55:01.018564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-30 00:55:01.018581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-30 00:55:01.018597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-30 00:55:01.018639 | orchestrator | 2025-05-30 00:55:01.018653 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-05-30 00:55:01.018668 | orchestrator | Friday 30 May 2025 00:51:39 +0000 (0:00:01.887) 0:04:01.289 ************ 2025-05-30 00:55:01.018698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-30 00:55:01.018715 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.018731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-30 00:55:01.018748 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.018758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-30 00:55:01.018767 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.018775 | orchestrator | 2025-05-30 00:55:01.018784 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-05-30 00:55:01.018793 | orchestrator | Friday 30 May 2025 00:51:40 +0000 (0:00:00.374) 0:04:01.663 ************ 2025-05-30 00:55:01.018802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-30 00:55:01.018811 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.018820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-30 00:55:01.018848 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.018878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 
'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-30 00:55:01.018889 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.018897 | orchestrator | 2025-05-30 00:55:01.018906 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-05-30 00:55:01.018915 | orchestrator | Friday 30 May 2025 00:51:41 +0000 (0:00:00.983) 0:04:02.647 ************ 2025-05-30 00:55:01.018924 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.018932 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.018941 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.018950 | orchestrator | 2025-05-30 00:55:01.018958 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-05-30 00:55:01.018967 | orchestrator | Friday 30 May 2025 00:51:42 +0000 (0:00:00.867) 0:04:03.514 ************ 2025-05-30 00:55:01.018976 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.018984 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.018993 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.019002 | orchestrator | 2025-05-30 00:55:01.019010 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-05-30 00:55:01.019019 | orchestrator | Friday 30 May 2025 00:51:43 +0000 (0:00:01.580) 0:04:05.095 ************ 2025-05-30 00:55:01.019028 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.019042 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.019052 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.019060 | orchestrator | 2025-05-30 00:55:01.019069 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-05-30 00:55:01.019078 | orchestrator | Friday 30 May 2025 00:51:44 +0000 (0:00:00.303) 0:04:05.399 ************ 2025-05-30 00:55:01.019087 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:55:01.019096 | orchestrator | 2025-05-30 00:55:01.019104 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-05-30 00:55:01.019128 | orchestrator | Friday 30 May 2025 00:51:45 +0000 (0:00:01.515) 0:04:06.914 ************ 2025-05-30 00:55:01.019138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-30 00:55:01.019148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.019163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.019173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.019188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 00:55:01.019201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.019211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-30 00:55:01.019226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 00:55:01.019237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.019246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 00:55:01.019260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.019283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.019293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 00:55:01.019316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.019326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.019335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 00:55:01.019363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.019374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.019383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 00:55:01.019406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 00:55:01.019416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.019426 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-30 00:55:01.019439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 00:55:01.019449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 00:55:01.019459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.019498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 00:55:01.019510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.019519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 00:55:01.019546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.019557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.019583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.019593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.019602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 00:55:01.019611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.019625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.019638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 00:55:01.019661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 00:55:01.019671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.019680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 00:55:01.019690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 00:55:01.019708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.019725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 00:55:01.019749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.019765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 00:55:01.019780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.019795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.019818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 00:55:01.019905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.019928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 00:55:01.019939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 00:55:01.019948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.019957 | orchestrator | 2025-05-30 00:55:01.019966 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-05-30 00:55:01.019975 | orchestrator | Friday 30 May 2025 00:51:50 +0000 (0:00:04.815) 0:04:11.729 ************ 2025-05-30 00:55:01.019990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 00:55:01.020015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 00:55:01.020031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 
5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.020041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.020050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.020060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.020078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.020102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.020112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 00:55:01.020122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 00:55:01.020131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.020287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.020327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': 
{'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 00:55:01.020338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 00:55:01.020347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 00:55:01.020357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 00:55:01.020366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.020376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.020438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 00:55:01.020461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 00:55:01.020470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.020478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.020487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.020495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': 
{'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.020503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 00:55:01.020591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.020605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 00:55:01.020614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 00:55:01.020623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 
'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.020632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 00:55:01.020685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 00:55:01.020716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.020726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 
'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 00:55:01.020735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.020744 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.020759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.020774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 00:55:01.020892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.020908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 
'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.020916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 00:55:01.020925 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.020933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.020942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 00:55:01.020956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 00:55:01.021015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.021031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 00:55:01.021041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.021049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.021057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 00:55:01.021066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.021129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 00:55:01.021142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 00:55:01.021151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.021159 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.021167 | orchestrator | 2025-05-30 00:55:01.021176 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-05-30 00:55:01.021184 | orchestrator | Friday 30 May 2025 00:51:52 +0000 (0:00:01.774) 0:04:13.504 ************ 2025-05-30 00:55:01.021192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-30 00:55:01.021201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-30 00:55:01.021209 | orchestrator | skipping: [testbed-node-0] 
2025-05-30 00:55:01.021217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-30 00:55:01.021225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-30 00:55:01.021239 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.021247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-30 00:55:01.021255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-30 00:55:01.021263 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.021271 | orchestrator | 2025-05-30 00:55:01.021279 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-05-30 00:55:01.021287 | orchestrator | Friday 30 May 2025 00:51:54 +0000 (0:00:01.996) 0:04:15.500 ************ 2025-05-30 00:55:01.021295 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.021302 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.021310 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.021318 | orchestrator | 2025-05-30 00:55:01.021326 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-05-30 00:55:01.021334 | orchestrator | Friday 30 May 2025 00:51:55 +0000 (0:00:01.405) 0:04:16.905 ************ 2025-05-30 00:55:01.021342 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.021350 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.021358 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.021366 | orchestrator | 2025-05-30 00:55:01.021374 | orchestrator | TASK [include_role : placement] ************************************************ 2025-05-30 00:55:01.021382 | orchestrator | Friday 30 May 2025 00:51:57 +0000 (0:00:02.312) 0:04:19.217 ************ 2025-05-30 00:55:01.021412 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:55:01.021422 | orchestrator | 2025-05-30 00:55:01.021430 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-05-30 00:55:01.021438 | orchestrator | Friday 30 May 2025 00:51:59 +0000 (0:00:01.539) 0:04:20.756 ************ 2025-05-30 00:55:01.021450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-30 00:55:01.021459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-30 00:55:01.021472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-30 00:55:01.021480 | orchestrator | 2025-05-30 00:55:01.021488 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-05-30 00:55:01.021496 | orchestrator | Friday 30 May 2025 00:52:03 +0000 (0:00:03.614) 0:04:24.371 ************ 2025-05-30 00:55:01.021504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-30 00:55:01.021513 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.021558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 
'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-30 00:55:01.021569 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.021577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-30 00:55:01.021591 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.021599 | orchestrator | 2025-05-30 00:55:01.021607 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-05-30 00:55:01.021615 | orchestrator | Friday 30 May 2025 00:52:03 +0000 (0:00:00.731) 0:04:25.102 ************ 2025-05-30 00:55:01.021623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-30 00:55:01.021632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-30 00:55:01.021640 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.021648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-30 00:55:01.021656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-30 00:55:01.021665 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.021673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  
2025-05-30 00:55:01.021681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-30 00:55:01.021689 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.021697 | orchestrator | 2025-05-30 00:55:01.021705 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-05-30 00:55:01.021713 | orchestrator | Friday 30 May 2025 00:52:04 +0000 (0:00:00.945) 0:04:26.048 ************ 2025-05-30 00:55:01.021721 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.021729 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.021736 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.021744 | orchestrator | 2025-05-30 00:55:01.021752 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-05-30 00:55:01.021762 | orchestrator | Friday 30 May 2025 00:52:06 +0000 (0:00:01.351) 0:04:27.399 ************ 2025-05-30 00:55:01.021772 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.021781 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.021790 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.021800 | orchestrator | 2025-05-30 00:55:01.021829 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-05-30 00:55:01.021840 | orchestrator | Friday 30 May 2025 00:52:08 +0000 (0:00:02.304) 0:04:29.703 ************ 2025-05-30 00:55:01.021849 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:55:01.021872 | orchestrator | 2025-05-30 00:55:01.021882 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-05-30 00:55:01.021891 | orchestrator | Friday 30 May 2025 00:52:09 +0000 (0:00:01.601) 0:04:31.305 ************ 2025-05-30 00:55:01.021908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-30 00:55:01.021935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.021946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.021960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-30 00:55:01.022050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.022074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.022098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-30 00:55:01.022114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.022128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.022143 | orchestrator | 2025-05-30 00:55:01.022158 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-05-30 00:55:01.022173 | orchestrator | Friday 30 May 2025 00:52:15 +0000 (0:00:05.491) 0:04:36.796 ************ 2025-05-30 00:55:01.022229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-30 00:55:01.022247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.022256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.022264 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.022273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-30 00:55:01.022282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.022315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.022345 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.022354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-30 00:55:01.022363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.022372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.022380 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.022388 | orchestrator | 2025-05-30 00:55:01.022396 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-05-30 00:55:01.022404 | orchestrator | Friday 30 May 2025 00:52:16 +0000 (0:00:00.970) 0:04:37.767 ************ 2025-05-30 00:55:01.022412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-30 00:55:01.022420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-30 00:55:01.022429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-30 00:55:01.022459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-30 00:55:01.022484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-30 00:55:01.022497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-30 00:55:01.022505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-30 00:55:01.022513 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.022522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-30 00:55:01.022530 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.022538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-30 00:55:01.022546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-30 00:55:01.022554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-30 00:55:01.022562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': 
{'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-30 00:55:01.022570 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.022578 | orchestrator | 2025-05-30 00:55:01.022586 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-05-30 00:55:01.022594 | orchestrator | Friday 30 May 2025 00:52:17 +0000 (0:00:01.347) 0:04:39.115 ************ 2025-05-30 00:55:01.022602 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.022610 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.022618 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.022626 | orchestrator | 2025-05-30 00:55:01.022634 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-05-30 00:55:01.022642 | orchestrator | Friday 30 May 2025 00:52:19 +0000 (0:00:01.337) 0:04:40.452 ************ 2025-05-30 00:55:01.022650 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.022658 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.022666 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.022673 | orchestrator | 2025-05-30 00:55:01.022681 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-05-30 00:55:01.022689 | orchestrator | Friday 30 May 2025 00:52:21 +0000 (0:00:02.604) 0:04:43.057 ************ 2025-05-30 00:55:01.022697 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:55:01.022705 | orchestrator | 2025-05-30 00:55:01.022713 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-05-30 00:55:01.022721 | orchestrator | Friday 30 May 2025 00:52:23 +0000 (0:00:01.469) 0:04:44.526 ************ 2025-05-30 00:55:01.022729 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-1, testbed-node-0, testbed-node-2 => (item=nova-novncproxy) 2025-05-30 00:55:01.022737 | orchestrator | 2025-05-30 00:55:01.022758 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-05-30 00:55:01.022767 | orchestrator | Friday 30 May 2025 00:52:24 +0000 (0:00:01.532) 0:04:46.058 ************ 2025-05-30 00:55:01.022775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-30 00:55:01.022805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-30 00:55:01.022829 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-30 00:55:01.022838 | orchestrator | 2025-05-30 00:55:01.022846 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-05-30 00:55:01.022854 | orchestrator | Friday 30 May 2025 00:52:29 +0000 (0:00:05.248) 0:04:51.307 ************ 2025-05-30 00:55:01.022905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-30 00:55:01.022914 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.022922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-30 00:55:01.022930 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.022939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-30 00:55:01.022947 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.022955 | orchestrator | 2025-05-30 00:55:01.022962 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-05-30 00:55:01.022970 | orchestrator | Friday 30 May 2025 00:52:31 +0000 (0:00:01.455) 0:04:52.762 ************ 2025-05-30 00:55:01.022985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-30 00:55:01.022994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-30 00:55:01.023002 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.023010 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-30 00:55:01.023023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-30 00:55:01.023032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-30 00:55:01.023040 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.023072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-30 00:55:01.023082 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.023089 | orchestrator | 2025-05-30 00:55:01.023096 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-30 00:55:01.023103 | orchestrator | Friday 30 May 2025 00:52:33 +0000 (0:00:02.499) 0:04:55.262 ************ 2025-05-30 00:55:01.023109 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.023116 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.023123 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.023129 | orchestrator | 2025-05-30 00:55:01.023139 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-30 00:55:01.023146 | orchestrator | Friday 30 May 2025 00:52:36 +0000 (0:00:03.080) 0:04:58.343 ************ 2025-05-30 00:55:01.023153 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.023160 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.023166 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.023173 | orchestrator | 2025-05-30 00:55:01.023180 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-05-30 00:55:01.023186 | orchestrator | Friday 30 May 2025 00:52:40 +0000 (0:00:03.816) 0:05:02.159 ************ 2025-05-30 00:55:01.023193 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-05-30 00:55:01.023200 | orchestrator | 2025-05-30 00:55:01.023207 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-05-30 00:55:01.023214 | orchestrator | Friday 30 May 2025 00:52:42 +0000 (0:00:01.576) 0:05:03.736 ************ 2025-05-30 00:55:01.023221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout 
tunnel 1h']}}}})  2025-05-30 00:55:01.023232 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.023239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-30 00:55:01.023246 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.023253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-30 00:55:01.023260 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.023267 | orchestrator | 2025-05-30 00:55:01.023273 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-05-30 00:55:01.023280 | orchestrator | Friday 30 May 2025 00:52:44 +0000 (0:00:02.212) 0:05:05.949 ************ 2025-05-30 00:55:01.023287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-30 00:55:01.023294 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.023318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-30 00:55:01.023326 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.023348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-30 00:55:01.023356 | 
orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.023363 | orchestrator | 2025-05-30 00:55:01.023370 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-05-30 00:55:01.023376 | orchestrator | Friday 30 May 2025 00:52:46 +0000 (0:00:02.000) 0:05:07.949 ************ 2025-05-30 00:55:01.023383 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.023390 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.023396 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.023403 | orchestrator | 2025-05-30 00:55:01.023410 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-30 00:55:01.023442 | orchestrator | Friday 30 May 2025 00:52:48 +0000 (0:00:01.840) 0:05:09.790 ************ 2025-05-30 00:55:01.023449 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:55:01.023456 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:55:01.023463 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:55:01.023469 | orchestrator | 2025-05-30 00:55:01.023476 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-30 00:55:01.023483 | orchestrator | Friday 30 May 2025 00:52:51 +0000 (0:00:03.093) 0:05:12.884 ************ 2025-05-30 00:55:01.023490 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:55:01.023496 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:55:01.023503 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:55:01.023509 | orchestrator | 2025-05-30 00:55:01.023516 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-05-30 00:55:01.023523 | orchestrator | Friday 30 May 2025 00:52:55 +0000 (0:00:03.615) 0:05:16.499 ************ 2025-05-30 00:55:01.023530 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-05-30 00:55:01.023536 | orchestrator | 2025-05-30 00:55:01.023543 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-05-30 00:55:01.023550 | orchestrator | Friday 30 May 2025 00:52:56 +0000 (0:00:01.264) 0:05:17.763 ************ 2025-05-30 00:55:01.023557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-30 00:55:01.023564 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.023571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-30 00:55:01.023578 | orchestrator | skipping: [testbed-node-0] 
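For the nova-cell console proxies the same pattern repeats once per proxy: cell_proxy_loadbalancer.yml is included for nova-novncproxy, nova-spicehtml5proxy and nova-serialproxy, and only the enabled proxy (novncproxy in this deployment) produces a changed haproxy config while the disabled ones are skipped. The sketch below reproduces that selection using the enabled flags and tunnel timeouts visible in the log; the function is a hypothetical illustration, not kolla-ansible code.

# Data taken from the nova-cell items logged above; the selection helper is a
# hypothetical illustration of the changed/skipping outcome, not kolla-ansible code.
CONSOLE_PROXIES = {
    "nova-novncproxy":      {"enabled": True,  "listen_port": "6080",
                             "backend_http_extra": ["timeout tunnel 1h"]},
    "nova-spicehtml5proxy": {"enabled": False, "listen_port": "6082",
                             "backend_http_extra": ["timeout tunnel 1h"]},
    "nova-serialproxy":     {"enabled": False, "listen_port": "6083",
                             "backend_http_extra": ["timeout tunnel 10m"]},
}

def proxies_with_haproxy_config(proxies):
    # Only enabled proxies get their haproxy config copied (changed);
    # the rest are skipped, matching the task results above.
    return sorted(name for name, svc in proxies.items() if svc["enabled"])

print(proxies_with_haproxy_config(CONSOLE_PROXIES))
# -> ['nova-novncproxy']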
2025-05-30 00:55:01.023585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-30 00:55:01.023592 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.023599 | orchestrator | 2025-05-30 00:55:01.023605 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-05-30 00:55:01.023612 | orchestrator | Friday 30 May 2025 00:52:58 +0000 (0:00:01.687) 0:05:19.450 ************ 2025-05-30 00:55:01.023638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-30 00:55:01.023659 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.023670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-30 00:55:01.023677 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.023684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-30 00:55:01.023691 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.023697 | orchestrator | 2025-05-30 00:55:01.023704 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-05-30 00:55:01.023711 | orchestrator | Friday 30 May 2025 00:52:59 +0000 (0:00:01.381) 0:05:20.832 ************ 2025-05-30 00:55:01.023718 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.023724 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.023731 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.023737 | orchestrator | 2025-05-30 00:55:01.023744 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-30 
00:55:01.023751 | orchestrator | Friday 30 May 2025 00:53:00 +0000 (0:00:01.528) 0:05:22.360 ************ 2025-05-30 00:55:01.023757 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:55:01.023764 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:55:01.023771 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:55:01.023777 | orchestrator | 2025-05-30 00:55:01.023784 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-30 00:55:01.023791 | orchestrator | Friday 30 May 2025 00:53:03 +0000 (0:00:02.938) 0:05:25.298 ************ 2025-05-30 00:55:01.023798 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:55:01.023804 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:55:01.023811 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:55:01.023817 | orchestrator | 2025-05-30 00:55:01.023824 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-05-30 00:55:01.023831 | orchestrator | Friday 30 May 2025 00:53:08 +0000 (0:00:04.108) 0:05:29.407 ************ 2025-05-30 00:55:01.023838 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:55:01.023844 | orchestrator | 2025-05-30 00:55:01.023851 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-05-30 00:55:01.023869 | orchestrator | Friday 30 May 2025 00:53:09 +0000 (0:00:01.735) 0:05:31.143 ************ 2025-05-30 00:55:01.023876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-30 00:55:01.023914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-30 00:55:01.023935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-30 00:55:01.023943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-30 00:55:01.023950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.023958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-30 00:55:01.023965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-30 00:55:01.023976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-30 00:55:01.024004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-30 00:55:01.024012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.024019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-30 00:55:01.024026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-30 00:55:01.024034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-30 
00:55:01.024054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-30 00:55:01.024079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.024086 | orchestrator | 2025-05-30 00:55:01.024093 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-05-30 00:55:01.024100 | orchestrator | Friday 30 May 2025 00:53:14 +0000 (0:00:05.108) 0:05:36.252 ************ 2025-05-30 00:55:01.024162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-30 00:55:01.024177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-30 00:55:01.024184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-30 00:55:01.024191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-30 00:55:01.024203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.024210 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.024250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-30 00:55:01.024260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-30 00:55:01.024267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-30 00:55:01.024274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-30 00:55:01.024281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.024294 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.024319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-30 00:55:01.024330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-30 00:55:01.024338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-30 00:55:01.024345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-30 00:55:01.024352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-30 00:55:01.024359 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.024365 | orchestrator | 2025-05-30 00:55:01.024377 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-05-30 00:55:01.024389 | orchestrator | Friday 30 May 2025 00:53:16 +0000 (0:00:01.300) 0:05:37.552 ************ 2025-05-30 00:55:01.024401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-30 00:55:01.024413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-30 00:55:01.024424 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.024436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-30 00:55:01.024448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-30 00:55:01.024461 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.024473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-30 00:55:01.024484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-30 00:55:01.024491 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.024498 | orchestrator | 2025-05-30 00:55:01.024526 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-05-30 00:55:01.024534 | orchestrator | Friday 30 May 2025 00:53:17 +0000 (0:00:01.189) 0:05:38.742 ************ 2025-05-30 00:55:01.024541 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.024547 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.024554 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.024560 | orchestrator | 2025-05-30 00:55:01.024567 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-05-30 00:55:01.024585 | orchestrator | Friday 30 May 2025 00:53:18 +0000 (0:00:01.576) 0:05:40.319 ************ 2025-05-30 00:55:01.024597 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.024604 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.024610 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.024617 | orchestrator | 2025-05-30 00:55:01.024624 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-05-30 00:55:01.024630 | orchestrator | Friday 30 May 2025 00:53:21 +0000 (0:00:02.610) 0:05:42.930 ************ 2025-05-30 00:55:01.024637 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:55:01.024643 | orchestrator | 2025-05-30 00:55:01.024650 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-05-30 00:55:01.024656 | orchestrator | Friday 30 May 2025 00:53:23 +0000 (0:00:01.530) 0:05:44.460 ************ 2025-05-30 00:55:01.024663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-30 00:55:01.024677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-30 00:55:01.024686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-30 00:55:01.024748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-30 00:55:01.024766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-30 00:55:01.024787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-30 00:55:01.024799 | orchestrator | 2025-05-30 00:55:01.024811 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-05-30 00:55:01.024824 | orchestrator | Friday 30 May 2025 00:53:29 +0000 (0:00:06.719) 0:05:51.180 ************ 2025-05-30 00:55:01.024835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-30 00:55:01.024900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-30 00:55:01.024916 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.024927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-30 00:55:01.024955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-30 00:55:01.024968 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.024979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-30 00:55:01.025022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-30 00:55:01.025041 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.025053 | orchestrator | 2025-05-30 00:55:01.025064 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-05-30 00:55:01.025075 | orchestrator | Friday 30 May 2025 00:53:30 +0000 (0:00:00.997) 0:05:52.177 ************ 2025-05-30 00:55:01.025087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-30 00:55:01.025099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-30 00:55:01.025117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-30 00:55:01.025129 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.025140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-30 00:55:01.025152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-30 00:55:01.025164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-30 00:55:01.025175 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.025187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-30 00:55:01.025198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-30 00:55:01.025210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-30 00:55:01.025221 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.025232 | orchestrator | 2025-05-30 00:55:01.025244 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-05-30 00:55:01.025255 | orchestrator | Friday 30 May 2025 00:53:32 +0000 (0:00:01.493) 0:05:53.671 ************ 2025-05-30 00:55:01.025265 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.025276 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.025287 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.025298 | 
orchestrator | 2025-05-30 00:55:01.025308 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-05-30 00:55:01.025319 | orchestrator | Friday 30 May 2025 00:53:33 +0000 (0:00:00.704) 0:05:54.376 ************ 2025-05-30 00:55:01.025331 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.025342 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.025353 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.025364 | orchestrator | 2025-05-30 00:55:01.025375 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-05-30 00:55:01.025386 | orchestrator | Friday 30 May 2025 00:53:34 +0000 (0:00:01.841) 0:05:56.217 ************ 2025-05-30 00:55:01.025397 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:55:01.025408 | orchestrator | 2025-05-30 00:55:01.025418 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-05-30 00:55:01.025429 | orchestrator | Friday 30 May 2025 00:53:36 +0000 (0:00:01.832) 0:05:58.050 ************ 2025-05-30 00:55:01.025511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-30 00:55:01.025550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-30 00:55:01.025564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.025576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.025589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-30 00:55:01.025600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-30 00:55:01.025613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-30 00:55:01.025660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.025686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.025698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-30 00:55:01.025710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-30 00:55:01.025721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-30 00:55:01.025732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.025743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.025794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-30 00:55:01.025815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-30 00:55:01.025829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-30 00:55:01.025840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.025853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.025915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-30 00:55:01.025990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 
'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.026049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-30 00:55:01.026071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-30 00:55:01.026084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-30 00:55:01.026096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.026123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-30 00:55:01.026141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.026153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.026166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.026174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-30 00:55:01.026181 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-30 00:55:01.026188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.026200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.026207 | orchestrator | 2025-05-30 00:55:01.026219 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-05-30 00:55:01.026226 | orchestrator | Friday 30 May 2025 00:53:41 +0000 (0:00:05.079) 0:06:03.130 ************ 2025-05-30 00:55:01.026248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-30 00:55:01.026256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-30 00:55:01.026263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 
'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.026270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.026277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-30 00:55:01.026289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-30 00:55:01.026304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-30 00:55:01.026312 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.026319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.026326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-30 00:55:01.026339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-30 00:55:01.026352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-30 00:55:01.026362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.026382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.026389 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.026396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.026402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-30 00:55:01.026409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-30 00:55:01.026420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-30 00:55:01.026430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.026446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.026453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-30 00:55:01.026460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.026466 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.026473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-30 00:55:01.026483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-30 00:55:01.026490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.026499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.026509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-30 00:55:01.026516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-30 00:55:01.026524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-30 00:55:01.026541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.026553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.026564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-30 00:55:01.026586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 00:55:01.026597 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.026608 | orchestrator | 2025-05-30 00:55:01.026619 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-05-30 00:55:01.026629 | orchestrator | Friday 30 May 2025 00:53:43 +0000 (0:00:01.649) 0:06:04.779 ************ 2025-05-30 00:55:01.026640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-30 00:55:01.026651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-30 00:55:01.026663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-30 00:55:01.026674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-30 00:55:01.026685 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.026696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-30 00:55:01.026714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-30 00:55:01.026725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-30 00:55:01.026736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-30 00:55:01.026747 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.026758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-30 00:55:01.026769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-30 00:55:01.026780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-30 00:55:01.026790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-30 00:55:01.026801 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.026810 | orchestrator | 2025-05-30 00:55:01.026820 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-05-30 00:55:01.026832 | orchestrator | Friday 30 May 2025 
00:53:44 +0000 (0:00:01.402) 0:06:06.182 ************ 2025-05-30 00:55:01.026850 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.026877 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.026887 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.026898 | orchestrator | 2025-05-30 00:55:01.026908 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-05-30 00:55:01.026919 | orchestrator | Friday 30 May 2025 00:53:45 +0000 (0:00:01.096) 0:06:07.279 ************ 2025-05-30 00:55:01.026929 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.026939 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.026949 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.026959 | orchestrator | 2025-05-30 00:55:01.026975 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-05-30 00:55:01.026986 | orchestrator | Friday 30 May 2025 00:53:47 +0000 (0:00:01.679) 0:06:08.958 ************ 2025-05-30 00:55:01.026997 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:55:01.027008 | orchestrator | 2025-05-30 00:55:01.027017 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-05-30 00:55:01.027027 | orchestrator | Friday 30 May 2025 00:53:49 +0000 (0:00:01.589) 0:06:10.547 ************ 2025-05-30 00:55:01.027038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-30 00:55:01.027077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-30 00:55:01.027091 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-30 00:55:01.027102 | orchestrator | 2025-05-30 00:55:01.027113 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-05-30 00:55:01.027124 | orchestrator | Friday 30 May 2025 00:53:52 +0000 (0:00:02.986) 0:06:13.533 ************ 2025-05-30 00:55:01.027175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-30 00:55:01.027199 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.027211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-30 00:55:01.027221 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.027228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-30 00:55:01.027235 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.027244 | orchestrator | 2025-05-30 00:55:01.027254 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-05-30 00:55:01.027265 | orchestrator | Friday 30 May 2025 00:53:52 +0000 (0:00:00.691) 0:06:14.225 ************ 2025-05-30 00:55:01.027276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-30 00:55:01.027286 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.027297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-30 00:55:01.027307 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.027317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-30 00:55:01.027328 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.027337 | orchestrator | 2025-05-30 00:55:01.027347 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-05-30 00:55:01.027358 | orchestrator | Friday 30 May 2025 00:53:53 +0000 (0:00:00.864) 0:06:15.089 ************ 2025-05-30 00:55:01.027369 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.027379 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.027390 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.027400 | orchestrator | 2025-05-30 00:55:01.027410 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-05-30 00:55:01.027427 | orchestrator | Friday 30 May 2025 00:53:54 +0000 (0:00:00.778) 0:06:15.868 ************ 2025-05-30 00:55:01.027437 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.027456 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.027467 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.027477 | orchestrator | 2025-05-30 00:55:01.027488 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-05-30 00:55:01.027498 | orchestrator | Friday 30 May 2025 00:53:56 +0000 (0:00:01.944) 0:06:17.812 ************ 2025-05-30 00:55:01.027514 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:55:01.027524 | orchestrator | 
2025-05-30 00:55:01.027536 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-05-30 00:55:01.027546 | orchestrator | Friday 30 May 2025 00:53:58 +0000 (0:00:01.922) 0:06:19.735 ************ 2025-05-30 00:55:01.027558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-30 00:55:01.027571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-30 00:55:01.027583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-30 00:55:01.027599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-30 00:55:01.027643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-30 00:55:01.027658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-30 00:55:01.027669 | orchestrator | 2025-05-30 00:55:01.027680 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-05-30 00:55:01.027689 | orchestrator | Friday 30 May 2025 00:54:05 +0000 (0:00:07.263) 0:06:26.998 ************ 2025-05-30 00:55:01.027701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-30 00:55:01.027718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-30 00:55:01.027736 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.027767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-30 00:55:01.027782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-30 00:55:01.027794 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.027806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-30 00:55:01.027817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-30 00:55:01.027847 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.027875 | orchestrator | 2025-05-30 00:55:01.027887 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-05-30 00:55:01.027904 | orchestrator | Friday 30 May 2025 00:54:06 +0000 (0:00:01.008) 0:06:28.006 ************ 2025-05-30 00:55:01.027915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-30 00:55:01.027930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-30 00:55:01.027940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-30 00:55:01.027947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-30 00:55:01.027954 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.027960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-30 00:55:01.027967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-30 00:55:01.027973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-30 00:55:01.027980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-30 00:55:01.027986 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.027992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-30 00:55:01.027999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-30 00:55:01.028005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-30 00:55:01.028012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-30 00:55:01.028018 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.028024 | orchestrator | 2025-05-30 00:55:01.028035 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-05-30 00:55:01.028041 | orchestrator | Friday 30 May 2025 00:54:08 +0000 (0:00:02.178) 0:06:30.185 ************ 2025-05-30 00:55:01.028047 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.028054 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.028060 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.028066 | orchestrator | 2025-05-30 00:55:01.028072 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-05-30 00:55:01.028078 | orchestrator | Friday 30 May 2025 00:54:10 +0000 (0:00:01.713) 0:06:31.898 ************ 2025-05-30 00:55:01.028085 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.028091 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.028097 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.028103 | orchestrator | 2025-05-30 00:55:01.028109 | orchestrator | TASK [include_role : swift] **************************************************** 2025-05-30 00:55:01.028116 | orchestrator | Friday 30 May 2025 00:54:13 +0000 (0:00:02.724) 0:06:34.623 ************ 2025-05-30 00:55:01.028122 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.028128 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.028134 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.028140 | orchestrator | 2025-05-30 00:55:01.028146 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-05-30 00:55:01.028153 | 
orchestrator | Friday 30 May 2025 00:54:13 +0000 (0:00:00.337) 0:06:34.961 ************ 2025-05-30 00:55:01.028159 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.028165 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.028171 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.028177 | orchestrator | 2025-05-30 00:55:01.028184 | orchestrator | TASK [include_role : trove] **************************************************** 2025-05-30 00:55:01.028193 | orchestrator | Friday 30 May 2025 00:54:14 +0000 (0:00:00.657) 0:06:35.618 ************ 2025-05-30 00:55:01.028200 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.028206 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.028212 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.028218 | orchestrator | 2025-05-30 00:55:01.028225 | orchestrator | TASK [include_role : venus] **************************************************** 2025-05-30 00:55:01.028231 | orchestrator | Friday 30 May 2025 00:54:14 +0000 (0:00:00.619) 0:06:36.237 ************ 2025-05-30 00:55:01.028239 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.028250 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.028264 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.028275 | orchestrator | 2025-05-30 00:55:01.028285 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-05-30 00:55:01.028295 | orchestrator | Friday 30 May 2025 00:54:15 +0000 (0:00:00.324) 0:06:36.561 ************ 2025-05-30 00:55:01.028305 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.028315 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.028324 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.028333 | orchestrator | 2025-05-30 00:55:01.028343 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-05-30 00:55:01.028352 | orchestrator | Friday 30 May 2025 00:54:15 +0000 (0:00:00.596) 0:06:37.158 ************ 2025-05-30 00:55:01.028362 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.028373 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.028383 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.028394 | orchestrator | 2025-05-30 00:55:01.028405 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-05-30 00:55:01.028413 | orchestrator | Friday 30 May 2025 00:54:16 +0000 (0:00:01.028) 0:06:38.186 ************ 2025-05-30 00:55:01.028420 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:55:01.028426 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:55:01.028432 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:55:01.028438 | orchestrator | 2025-05-30 00:55:01.028444 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-05-30 00:55:01.028456 | orchestrator | Friday 30 May 2025 00:54:17 +0000 (0:00:00.665) 0:06:38.852 ************ 2025-05-30 00:55:01.028462 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:55:01.028468 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:55:01.028474 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:55:01.028480 | orchestrator | 2025-05-30 00:55:01.028486 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-05-30 00:55:01.028493 | orchestrator | Friday 30 May 2025 00:54:18 +0000 (0:00:00.610) 0:06:39.462 ************ 
2025-05-30 00:55:01.028499 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:55:01.028505 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:55:01.028511 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:55:01.028517 | orchestrator | 2025-05-30 00:55:01.028523 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-05-30 00:55:01.028530 | orchestrator | Friday 30 May 2025 00:54:19 +0000 (0:00:01.250) 0:06:40.713 ************ 2025-05-30 00:55:01.028536 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:55:01.028542 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:55:01.028548 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:55:01.028554 | orchestrator | 2025-05-30 00:55:01.028561 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-05-30 00:55:01.028567 | orchestrator | Friday 30 May 2025 00:54:20 +0000 (0:00:01.365) 0:06:42.078 ************ 2025-05-30 00:55:01.028573 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:55:01.028579 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:55:01.028585 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:55:01.028592 | orchestrator | 2025-05-30 00:55:01.028598 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-05-30 00:55:01.028604 | orchestrator | Friday 30 May 2025 00:54:21 +0000 (0:00:01.009) 0:06:43.088 ************ 2025-05-30 00:55:01.028610 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.028616 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.028622 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.028629 | orchestrator | 2025-05-30 00:55:01.028635 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-05-30 00:55:01.028641 | orchestrator | Friday 30 May 2025 00:54:31 +0000 (0:00:09.694) 0:06:52.783 ************ 2025-05-30 00:55:01.028647 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:55:01.028653 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:55:01.028659 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:55:01.028665 | orchestrator | 2025-05-30 00:55:01.028672 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-05-30 00:55:01.028678 | orchestrator | Friday 30 May 2025 00:54:32 +0000 (0:00:01.310) 0:06:54.093 ************ 2025-05-30 00:55:01.028684 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.028690 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:55:01.028696 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.028702 | orchestrator | 2025-05-30 00:55:01.028709 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-05-30 00:55:01.028715 | orchestrator | Friday 30 May 2025 00:54:43 +0000 (0:00:10.361) 0:07:04.455 ************ 2025-05-30 00:55:01.028721 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:55:01.028727 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:55:01.028734 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:55:01.028740 | orchestrator | 2025-05-30 00:55:01.028748 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-05-30 00:55:01.028758 | orchestrator | Friday 30 May 2025 00:54:44 +0000 (0:00:01.738) 0:07:06.193 ************ 2025-05-30 00:55:01.028769 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:55:01.028779 | orchestrator | changed: [testbed-node-1] 2025-05-30 
00:55:01.028789 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:55:01.028800 | orchestrator | 2025-05-30 00:55:01.028810 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-05-30 00:55:01.028821 | orchestrator | Friday 30 May 2025 00:54:49 +0000 (0:00:04.822) 0:07:11.016 ************ 2025-05-30 00:55:01.028838 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.028849 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.028877 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.028888 | orchestrator | 2025-05-30 00:55:01.028898 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-05-30 00:55:01.028909 | orchestrator | Friday 30 May 2025 00:54:50 +0000 (0:00:00.604) 0:07:11.621 ************ 2025-05-30 00:55:01.028920 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.028937 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.028948 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.028959 | orchestrator | 2025-05-30 00:55:01.028968 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-05-30 00:55:01.028975 | orchestrator | Friday 30 May 2025 00:54:50 +0000 (0:00:00.328) 0:07:11.949 ************ 2025-05-30 00:55:01.028981 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.028987 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.028993 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.028999 | orchestrator | 2025-05-30 00:55:01.029010 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-05-30 00:55:01.029016 | orchestrator | Friday 30 May 2025 00:54:51 +0000 (0:00:00.643) 0:07:12.593 ************ 2025-05-30 00:55:01.029022 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.029028 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.029035 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.029041 | orchestrator | 2025-05-30 00:55:01.029047 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-05-30 00:55:01.029053 | orchestrator | Friday 30 May 2025 00:54:51 +0000 (0:00:00.606) 0:07:13.200 ************ 2025-05-30 00:55:01.029059 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.029065 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.029071 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.029078 | orchestrator | 2025-05-30 00:55:01.029084 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-05-30 00:55:01.029090 | orchestrator | Friday 30 May 2025 00:54:52 +0000 (0:00:00.624) 0:07:13.824 ************ 2025-05-30 00:55:01.029096 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:55:01.029102 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:55:01.029109 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:55:01.029115 | orchestrator | 2025-05-30 00:55:01.029121 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-05-30 00:55:01.029127 | orchestrator | Friday 30 May 2025 00:54:52 +0000 (0:00:00.383) 0:07:14.208 ************ 2025-05-30 00:55:01.029133 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:55:01.029139 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:55:01.029146 | orchestrator | ok: [testbed-node-0] 2025-05-30 
00:55:01.029152 | orchestrator |
2025-05-30 00:55:01.029158 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-05-30 00:55:01.029164 | orchestrator | Friday 30 May 2025 00:54:57 +0000 (0:00:04.527) 0:07:18.736 ************
2025-05-30 00:55:01.029171 | orchestrator | ok: [testbed-node-0]
2025-05-30 00:55:01.029177 | orchestrator | ok: [testbed-node-1]
2025-05-30 00:55:01.029183 | orchestrator | ok: [testbed-node-2]
2025-05-30 00:55:01.029189 | orchestrator |
2025-05-30 00:55:01.029195 | orchestrator | PLAY RECAP *********************************************************************
2025-05-30 00:55:01.029202 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0
2025-05-30 00:55:01.029208 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0
2025-05-30 00:55:01.029215 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0
2025-05-30 00:55:01.029221 | orchestrator |
2025-05-30 00:55:01.029227 | orchestrator |
2025-05-30 00:55:01.029241 | orchestrator | TASKS RECAP ********************************************************************
2025-05-30 00:55:01.029247 | orchestrator | Friday 30 May 2025 00:54:58 +0000 (0:00:01.105) 0:07:19.841 ************
2025-05-30 00:55:01.029253 | orchestrator | ===============================================================================
2025-05-30 00:55:01.029259 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 10.36s
2025-05-30 00:55:01.029266 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.69s
2025-05-30 00:55:01.029272 | orchestrator | haproxy-config : Copying over heat haproxy config ----------------------- 7.33s
2025-05-30 00:55:01.029278 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.26s
2025-05-30 00:55:01.029284 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.72s
2025-05-30 00:55:01.029290 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 5.54s
2025-05-30 00:55:01.029297 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.49s
2025-05-30 00:55:01.029303 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 5.30s
2025-05-30 00:55:01.029309 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.25s
2025-05-30 00:55:01.029315 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 5.22s
2025-05-30 00:55:01.029321 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.14s
2025-05-30 00:55:01.029327 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 5.11s
2025-05-30 00:55:01.029333 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.08s
2025-05-30 00:55:01.029340 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.91s
2025-05-30 00:55:01.029346 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.89s
2025-05-30 00:55:01.029352 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 4.85s
2025-05-30 00:55:01.029358 | orchestrator | loadbalancer : Start
backup keepalived container ------------------------ 4.82s 2025-05-30 00:55:01.029364 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.82s 2025-05-30 00:55:01.029370 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.57s 2025-05-30 00:55:01.029377 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.53s 2025-05-30 00:55:01.029386 | orchestrator | 2025-05-30 00:55:00 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:55:01.029393 | orchestrator | 2025-05-30 00:55:00 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:55:04.058005 | orchestrator | 2025-05-30 00:55:04 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:55:04.058424 | orchestrator | 2025-05-30 00:55:04 | INFO  | Task e0dfe8bf-08fa-4e02-9756-0b7e1f6d50d9 is in state STARTED 2025-05-30 00:55:04.060585 | orchestrator | 2025-05-30 00:55:04 | INFO  | Task 689b1e7a-ebcc-4efa-9c5f-2d9a1b22460c is in state STARTED 2025-05-30 00:55:04.060610 | orchestrator | 2025-05-30 00:55:04 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:55:04.060622 | orchestrator | 2025-05-30 00:55:04 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:55:07.105478 | orchestrator | 2025-05-30 00:55:07 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:55:07.105597 | orchestrator | 2025-05-30 00:55:07 | INFO  | Task e0dfe8bf-08fa-4e02-9756-0b7e1f6d50d9 is in state STARTED 2025-05-30 00:55:07.106156 | orchestrator | 2025-05-30 00:55:07 | INFO  | Task 689b1e7a-ebcc-4efa-9c5f-2d9a1b22460c is in state STARTED 2025-05-30 00:55:07.106447 | orchestrator | 2025-05-30 00:55:07 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:55:07.106499 | orchestrator | 2025-05-30 00:55:07 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:55:10.148976 | orchestrator | 2025-05-30 00:55:10 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:55:10.149084 | orchestrator | 2025-05-30 00:55:10 | INFO  | Task e0dfe8bf-08fa-4e02-9756-0b7e1f6d50d9 is in state STARTED 2025-05-30 00:55:10.149295 | orchestrator | 2025-05-30 00:55:10 | INFO  | Task 689b1e7a-ebcc-4efa-9c5f-2d9a1b22460c is in state STARTED 2025-05-30 00:55:10.150069 | orchestrator | 2025-05-30 00:55:10 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:55:10.150092 | orchestrator | 2025-05-30 00:55:10 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:55:13.182609 | orchestrator | 2025-05-30 00:55:13 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:55:13.185070 | orchestrator | 2025-05-30 00:55:13 | INFO  | Task e0dfe8bf-08fa-4e02-9756-0b7e1f6d50d9 is in state STARTED 2025-05-30 00:55:13.185101 | orchestrator | 2025-05-30 00:55:13 | INFO  | Task 689b1e7a-ebcc-4efa-9c5f-2d9a1b22460c is in state STARTED 2025-05-30 00:55:13.186933 | orchestrator | 2025-05-30 00:55:13 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:55:13.187027 | orchestrator | 2025-05-30 00:55:13 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:55:16.219267 | orchestrator | 2025-05-30 00:55:16 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:55:16.219370 | orchestrator | 2025-05-30 00:55:16 | INFO  | Task e0dfe8bf-08fa-4e02-9756-0b7e1f6d50d9 
is in state STARTED
2025-05-30 00:55:16 – 2025-05-30 00:57:09 | orchestrator | INFO  | Tasks fb4c5da4-6736-4528-a700-d20c81fc8612, e0dfe8bf-08fa-4e02-9756-0b7e1f6d50d9, 689b1e7a-ebcc-4efa-9c5f-2d9a1b22460c and 3e3bb1ef-f820-458f-9d16-87e9a792aba0 remain in state STARTED on every check; after each poll the manager logs "Wait 1 second(s) until the next check" and re-polls roughly every three seconds.
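The repeated state checks above are a client-side wait loop: the job polls the manager's task states until the long-running deploy tasks leave the STARTED state. A minimal sketch of that pattern is shown here; it is not the actual client code used by this job, and get_state stands in for whatever lookup call the real client performs.

import time

def wait_for_tasks(task_ids, get_state, interval=1.0):
    """Poll task states until none of them is PENDING/STARTED anymore.

    ``get_state`` is whatever callable the client uses to look up a task's
    state by id (not shown in this log); it should return strings such as
    "STARTED" or "SUCCESS".
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state not in ("PENDING", "STARTED"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)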
2025-05-30 00:57:12.216741 | orchestrator | 2025-05-30 00:57:12.217029 | orchestrator | 2025-05-30 00:57:12.217048 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2025-05-30 00:57:12.217061 | orchestrator | 2025-05-30 00:57:12.217073 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-30 00:57:12.217084 | orchestrator | Friday 30 May 2025 00:55:02 +0000 (0:00:00.312) 0:00:00.312 ************ 2025-05-30 00:57:12.217095 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:57:12.217107 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:57:12.217117 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:57:12.217128 | orchestrator | 2025-05-30 00:57:12.217140 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-30 00:57:12.217151 | orchestrator | Friday 30 May 2025 00:55:02 +0000 (0:00:00.384) 0:00:00.696 ************ 2025-05-30 00:57:12.217176 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-05-30 00:57:12.217188 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-05-30 00:57:12.217200 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-05-30 00:57:12.217212 | orchestrator | 2025-05-30 00:57:12.217223 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-05-30 00:57:12.217234 | orchestrator | 2025-05-30 00:57:12.217245 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-30 00:57:12.217256 | orchestrator | Friday 30 May 2025 00:55:03 +0000 (0:00:00.300) 0:00:00.997 ************ 2025-05-30 00:57:12.217267 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:57:12.217278 | orchestrator | 2025-05-30 00:57:12.217289 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-05-30 00:57:12.217300 | orchestrator | Friday 30 May 2025 00:55:03 +0000 (0:00:00.731) 0:00:01.728 ************ 2025-05-30 00:57:12.217312 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-30 00:57:12.217323 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-30 00:57:12.217334 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-30 00:57:12.217345 | orchestrator | 2025-05-30 00:57:12.217356 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-05-30 00:57:12.217367 | orchestrator | Friday 30 May 2025 00:55:04 +0000 (0:00:00.818) 0:00:02.546 ************ 2025-05-30 00:57:12.217382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-30 
00:57:12.217398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-30 00:57:12.217445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-30 00:57:12.217466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-30 00:57:12.217481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-30 00:57:12.217496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-30 00:57:12.217516 | orchestrator | 2025-05-30 00:57:12.217531 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-30 00:57:12.217545 | orchestrator | Friday 30 May 2025 00:55:06 +0000 (0:00:01.689) 0:00:04.236 ************ 2025-05-30 00:57:12.217559 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:57:12.217572 | orchestrator | 2025-05-30 00:57:12.217586 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-05-30 00:57:12.217599 | orchestrator | Friday 30 May 2025 00:55:07 +0000 (0:00:00.733) 0:00:04.969 ************ 2025-05-30 00:57:12.217623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-30 00:57:12.217643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-30 00:57:12.217658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-30 00:57:12.217674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-30 00:57:12.217703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2025-05-30 00:57:12.217723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-30 00:57:12.217737 | orchestrator | 2025-05-30 00:57:12.217750 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-05-30 00:57:12.217762 | orchestrator | Friday 30 May 2025 00:55:10 +0000 (0:00:03.005) 0:00:07.975 ************ 2025-05-30 00:57:12.217776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-30 00:57:12.217797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-30 00:57:12.217812 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:57:12.217831 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-30 00:57:12.217852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-30 00:57:12.217866 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:57:12.217879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-30 00:57:12.217929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-30 00:57:12.217942 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:57:12.217953 | orchestrator | 2025-05-30 00:57:12.217964 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-05-30 00:57:12.217975 | orchestrator | Friday 30 May 2025 00:55:11 +0000 (0:00:01.002) 0:00:08.977 ************ 2025-05-30 00:57:12.217993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-30 00:57:12.218011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-30 00:57:12.218077 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:57:12.218089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-30 00:57:12.218108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-30 00:57:12.218120 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:57:12.218138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-30 00:57:12.218155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-30 00:57:12.218167 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:57:12.218178 | orchestrator | 2025-05-30 00:57:12.218189 | orchestrator | TASK [opensearch : Copying 
over config.json files for services] **************** 2025-05-30 00:57:12.218200 | orchestrator | Friday 30 May 2025 00:55:12 +0000 (0:00:01.027) 0:00:10.004 ************ 2025-05-30 00:57:12.218211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-30 00:57:12.218229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-30 00:57:12.218241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-30 00:57:12.218269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-30 00:57:12.218283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-30 00:57:12.218301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-30 00:57:12.218313 | orchestrator | 2025-05-30 00:57:12.218325 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-05-30 00:57:12.218336 | orchestrator | Friday 30 May 2025 00:55:14 +0000 (0:00:02.506) 0:00:12.511 ************ 2025-05-30 00:57:12.218346 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:57:12.218357 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:57:12.218368 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:57:12.218379 | orchestrator | 2025-05-30 00:57:12.218389 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-05-30 00:57:12.218400 | orchestrator | Friday 30 May 2025 00:55:18 +0000 (0:00:04.059) 0:00:16.571 ************ 2025-05-30 00:57:12.218411 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:57:12.218422 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:57:12.218432 | orchestrator | 
changed: [testbed-node-2] 2025-05-30 00:57:12.218443 | orchestrator | 2025-05-30 00:57:12.218454 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-05-30 00:57:12.218465 | orchestrator | Friday 30 May 2025 00:55:20 +0000 (0:00:01.556) 0:00:18.127 ************ 2025-05-30 00:57:12.218484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-30 00:57:12.218502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-30 00:57:12.218520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-30 00:57:12.218532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-30 00:57:12.218550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-30 00:57:12.218567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-30 00:57:12.218585 | orchestrator | 2025-05-30 00:57:12.218596 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-30 00:57:12.218607 | orchestrator | Friday 30 May 2025 00:55:22 +0000 (0:00:02.114) 0:00:20.241 ************ 2025-05-30 00:57:12.218618 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:57:12.218629 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:57:12.218640 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:57:12.218651 | orchestrator | 2025-05-30 00:57:12.218661 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-30 00:57:12.218672 | orchestrator | Friday 30 May 2025 00:55:22 +0000 (0:00:00.326) 0:00:20.568 ************ 2025-05-30 00:57:12.218683 | orchestrator | 
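The container definitions logged above wire kolla-style healthchecks (healthcheck_curl against http://192.168.16.10-12:9200 for OpenSearch and :5601 for OpenSearch Dashboards, each with interval 30, retries 3 and timeout 30), and the play later blocks on "Wait for OpenSearch to become ready". The snippet below is a minimal Python sketch of such a readiness probe under those same parameters; wait_until_ready is an illustrative name and this is not the healthcheck_curl helper shipped inside the kolla images.

# Minimal readiness-probe sketch (illustrative only, not kolla's healthcheck_curl):
# poll an HTTP endpoint with the interval/retries/timeout values seen in the
# container healthcheck definitions above.
import time
import urllib.error
import urllib.request

def wait_until_ready(url: str, retries: int = 3, interval: float = 30.0,
                     timeout: float = 30.0) -> bool:
    """Return True once url answers with an HTTP status below 500."""
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status < 500:
                    return True
        except urllib.error.HTTPError as err:
            if err.code < 500:   # service answered, just not 2xx (e.g. auth in front of 5601)
                return True
        except (urllib.error.URLError, OSError):
            pass                 # connection refused or timed out: container not up yet
        if attempt < retries:
            time.sleep(interval)
    return False

if __name__ == "__main__":
    # Endpoints taken from the healthcheck entries logged above.
    for endpoint in ("http://192.168.16.10:9200", "http://192.168.16.10:5601"):
        print(endpoint, "ready" if wait_until_ready(endpoint) else "not ready")

With retries=3 and interval=30 the probe gives a container roughly 90 seconds to come up before reporting failure, roughly the same budget the healthcheck blocks above grant before Docker would mark the container unhealthy.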
2025-05-30 00:57:12.218694 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-30 00:57:12.218705 | orchestrator | Friday 30 May 2025 00:55:22 +0000 (0:00:00.273) 0:00:20.842 ************ 2025-05-30 00:57:12.218715 | orchestrator | 2025-05-30 00:57:12.218726 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-30 00:57:12.218737 | orchestrator | Friday 30 May 2025 00:55:22 +0000 (0:00:00.071) 0:00:20.913 ************ 2025-05-30 00:57:12.218748 | orchestrator | 2025-05-30 00:57:12.218758 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-05-30 00:57:12.218769 | orchestrator | Friday 30 May 2025 00:55:23 +0000 (0:00:00.130) 0:00:21.044 ************ 2025-05-30 00:57:12.218780 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:57:12.218791 | orchestrator | 2025-05-30 00:57:12.218801 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-05-30 00:57:12.218812 | orchestrator | Friday 30 May 2025 00:55:23 +0000 (0:00:00.283) 0:00:21.327 ************ 2025-05-30 00:57:12.218823 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:57:12.218834 | orchestrator | 2025-05-30 00:57:12.218844 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-05-30 00:57:12.218855 | orchestrator | Friday 30 May 2025 00:55:23 +0000 (0:00:00.404) 0:00:21.732 ************ 2025-05-30 00:57:12.218866 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:57:12.218877 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:57:12.218888 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:57:12.218914 | orchestrator | 2025-05-30 00:57:12.218925 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-05-30 00:57:12.218936 | orchestrator | Friday 30 May 2025 00:55:54 +0000 (0:00:30.669) 0:00:52.402 ************ 2025-05-30 00:57:12.218947 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:57:12.218957 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:57:12.218968 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:57:12.218979 | orchestrator | 2025-05-30 00:57:12.218990 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-30 00:57:12.219000 | orchestrator | Friday 30 May 2025 00:56:57 +0000 (0:01:03.041) 0:01:55.443 ************ 2025-05-30 00:57:12.219011 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:57:12.219022 | orchestrator | 2025-05-30 00:57:12.219033 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-05-30 00:57:12.219044 | orchestrator | Friday 30 May 2025 00:56:58 +0000 (0:00:00.699) 0:01:56.142 ************ 2025-05-30 00:57:12.219054 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:57:12.219065 | orchestrator | 2025-05-30 00:57:12.219076 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-05-30 00:57:12.219087 | orchestrator | Friday 30 May 2025 00:57:00 +0000 (0:00:02.607) 0:01:58.750 ************ 2025-05-30 00:57:12.219105 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:57:12.219116 | orchestrator | 2025-05-30 00:57:12.219126 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-05-30 
00:57:12.219137 | orchestrator | Friday 30 May 2025 00:57:03 +0000 (0:00:02.504) 0:02:01.255 ************ 2025-05-30 00:57:12.219148 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:57:12.219159 | orchestrator | 2025-05-30 00:57:12.219169 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-05-30 00:57:12.219180 | orchestrator | Friday 30 May 2025 00:57:06 +0000 (0:00:02.988) 0:02:04.243 ************ 2025-05-30 00:57:12.219191 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:57:12.219202 | orchestrator | 2025-05-30 00:57:12.219218 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:57:12.219230 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-30 00:57:12.219242 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-30 00:57:12.219253 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-30 00:57:12.219264 | orchestrator | 2025-05-30 00:57:12.219275 | orchestrator | 2025-05-30 00:57:12.219290 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-30 00:57:12.219302 | orchestrator | Friday 30 May 2025 00:57:09 +0000 (0:00:02.841) 0:02:07.085 ************ 2025-05-30 00:57:12.219312 | orchestrator | =============================================================================== 2025-05-30 00:57:12.219323 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 63.04s 2025-05-30 00:57:12.219334 | orchestrator | opensearch : Restart opensearch container ------------------------------ 30.67s 2025-05-30 00:57:12.219344 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 4.06s 2025-05-30 00:57:12.219355 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.01s 2025-05-30 00:57:12.219366 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.99s 2025-05-30 00:57:12.219377 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.84s 2025-05-30 00:57:12.219387 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.61s 2025-05-30 00:57:12.219398 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.51s 2025-05-30 00:57:12.219409 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.50s 2025-05-30 00:57:12.219420 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.11s 2025-05-30 00:57:12.219430 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.69s 2025-05-30 00:57:12.219441 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.56s 2025-05-30 00:57:12.219452 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.03s 2025-05-30 00:57:12.219462 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.00s 2025-05-30 00:57:12.219473 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.82s 2025-05-30 00:57:12.219484 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.73s 
2025-05-30 00:57:12.219495 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.73s 2025-05-30 00:57:12.219506 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.70s 2025-05-30 00:57:12.219516 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.48s 2025-05-30 00:57:12.219527 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.40s 2025-05-30 00:57:12.219538 | orchestrator | 2025-05-30 00:57:12 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:57:12.219555 | orchestrator | 2025-05-30 00:57:12 | INFO  | Task e0dfe8bf-08fa-4e02-9756-0b7e1f6d50d9 is in state SUCCESS 2025-05-30 00:57:12.219566 | orchestrator | 2025-05-30 00:57:12 | INFO  | Task 689b1e7a-ebcc-4efa-9c5f-2d9a1b22460c is in state STARTED 2025-05-30 00:57:12.219577 | orchestrator | 2025-05-30 00:57:12 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:57:12.219588 | orchestrator | 2025-05-30 00:57:12 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:57:15.260447 | orchestrator | 2025-05-30 00:57:15 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:57:15.260541 | orchestrator | 2025-05-30 00:57:15 | INFO  | Task 689b1e7a-ebcc-4efa-9c5f-2d9a1b22460c is in state STARTED 2025-05-30 00:57:15.260556 | orchestrator | 2025-05-30 00:57:15 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:57:15.260568 | orchestrator | 2025-05-30 00:57:15 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:57:18.303361 | orchestrator | 2025-05-30 00:57:18 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:57:18.303771 | orchestrator | 2025-05-30 00:57:18 | INFO  | Task 689b1e7a-ebcc-4efa-9c5f-2d9a1b22460c is in state STARTED 2025-05-30 00:57:18.305939 | orchestrator | 2025-05-30 00:57:18 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:57:18.305987 | orchestrator | 2025-05-30 00:57:18 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:57:21.357039 | orchestrator | 2025-05-30 00:57:21 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:57:21.357613 | orchestrator | 2025-05-30 00:57:21 | INFO  | Task 689b1e7a-ebcc-4efa-9c5f-2d9a1b22460c is in state STARTED 2025-05-30 00:57:21.359024 | orchestrator | 2025-05-30 00:57:21 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:57:21.359053 | orchestrator | 2025-05-30 00:57:21 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:57:24.407311 | orchestrator | 2025-05-30 00:57:24 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:57:24.409607 | orchestrator | 2025-05-30 00:57:24 | INFO  | Task 689b1e7a-ebcc-4efa-9c5f-2d9a1b22460c is in state STARTED 2025-05-30 00:57:24.411723 | orchestrator | 2025-05-30 00:57:24 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:57:24.411772 | orchestrator | 2025-05-30 00:57:24 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:57:27.465063 | orchestrator | 2025-05-30 00:57:27 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:57:27.466158 | orchestrator | 2025-05-30 00:57:27 | INFO  | Task 689b1e7a-ebcc-4efa-9c5f-2d9a1b22460c is in state STARTED 2025-05-30 00:57:27.467994 | orchestrator | 
2025-05-30 00:57:27 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:57:27.468051 | orchestrator | 2025-05-30 00:57:27 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:57:30.512621 | orchestrator | 2025-05-30 00:57:30 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:57:30.514995 | orchestrator | 2025-05-30 00:57:30 | INFO  | Task 689b1e7a-ebcc-4efa-9c5f-2d9a1b22460c is in state STARTED 2025-05-30 00:57:30.516853 | orchestrator | 2025-05-30 00:57:30 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:57:30.516884 | orchestrator | 2025-05-30 00:57:30 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:57:33.570948 | orchestrator | 2025-05-30 00:57:33 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:57:33.571712 | orchestrator | 2025-05-30 00:57:33 | INFO  | Task 689b1e7a-ebcc-4efa-9c5f-2d9a1b22460c is in state STARTED 2025-05-30 00:57:33.572217 | orchestrator | 2025-05-30 00:57:33 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:57:33.572242 | orchestrator | 2025-05-30 00:57:33 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:57:36.617411 | orchestrator | 2025-05-30 00:57:36 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:57:36.617496 | orchestrator | 2025-05-30 00:57:36 | INFO  | Task 689b1e7a-ebcc-4efa-9c5f-2d9a1b22460c is in state STARTED 2025-05-30 00:57:36.617511 | orchestrator | 2025-05-30 00:57:36 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:57:36.617523 | orchestrator | 2025-05-30 00:57:36 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:57:39.669965 | orchestrator | 2025-05-30 00:57:39 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:57:39.671376 | orchestrator | 2025-05-30 00:57:39 | INFO  | Task 689b1e7a-ebcc-4efa-9c5f-2d9a1b22460c is in state STARTED 2025-05-30 00:57:39.672890 | orchestrator | 2025-05-30 00:57:39 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:57:39.672957 | orchestrator | 2025-05-30 00:57:39 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:57:42.719886 | orchestrator | 2025-05-30 00:57:42 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:57:42.721109 | orchestrator | 2025-05-30 00:57:42 | INFO  | Task 689b1e7a-ebcc-4efa-9c5f-2d9a1b22460c is in state STARTED 2025-05-30 00:57:42.722969 | orchestrator | 2025-05-30 00:57:42 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:57:42.723000 | orchestrator | 2025-05-30 00:57:42 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:57:45.770827 | orchestrator | 2025-05-30 00:57:45 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:57:45.772230 | orchestrator | 2025-05-30 00:57:45 | INFO  | Task 689b1e7a-ebcc-4efa-9c5f-2d9a1b22460c is in state STARTED 2025-05-30 00:57:45.774111 | orchestrator | 2025-05-30 00:57:45 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:57:45.774134 | orchestrator | 2025-05-30 00:57:45 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:57:48.824572 | orchestrator | 2025-05-30 00:57:48 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:57:48.826075 | orchestrator | 2025-05-30 00:57:48 | INFO  | Task 
689b1e7a-ebcc-4efa-9c5f-2d9a1b22460c is in state STARTED 2025-05-30 00:57:48.827732 | orchestrator | 2025-05-30 00:57:48 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:57:48.827759 | orchestrator | 2025-05-30 00:57:48 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:57:51.872967 | orchestrator | 2025-05-30 00:57:51 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:57:51.875630 | orchestrator | 2025-05-30 00:57:51 | INFO  | Task 689b1e7a-ebcc-4efa-9c5f-2d9a1b22460c is in state STARTED 2025-05-30 00:57:51.878223 | orchestrator | 2025-05-30 00:57:51 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:57:51.878258 | orchestrator | 2025-05-30 00:57:51 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:57:54.938973 | orchestrator | 2025-05-30 00:57:54 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:57:54.940640 | orchestrator | 2025-05-30 00:57:54 | INFO  | Task 689b1e7a-ebcc-4efa-9c5f-2d9a1b22460c is in state STARTED 2025-05-30 00:57:54.945039 | orchestrator | 2025-05-30 00:57:54 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:57:54.945082 | orchestrator | 2025-05-30 00:57:54 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:57:58.000385 | orchestrator | 2025-05-30 00:57:57 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:57:58.002451 | orchestrator | 2025-05-30 00:57:57 | INFO  | Task 689b1e7a-ebcc-4efa-9c5f-2d9a1b22460c is in state STARTED 2025-05-30 00:57:58.004260 | orchestrator | 2025-05-30 00:57:58 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:57:58.004615 | orchestrator | 2025-05-30 00:57:58 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:58:01.068563 | orchestrator | 2025-05-30 00:58:01 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:58:01.070418 | orchestrator | 2025-05-30 00:58:01 | INFO  | Task 689b1e7a-ebcc-4efa-9c5f-2d9a1b22460c is in state STARTED 2025-05-30 00:58:01.075077 | orchestrator | 2025-05-30 00:58:01 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:58:01.075132 | orchestrator | 2025-05-30 00:58:01 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:58:04.130655 | orchestrator | 2025-05-30 00:58:04 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:58:04.132457 | orchestrator | 2025-05-30 00:58:04 | INFO  | Task 689b1e7a-ebcc-4efa-9c5f-2d9a1b22460c is in state STARTED 2025-05-30 00:58:04.134200 | orchestrator | 2025-05-30 00:58:04 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:58:04.134547 | orchestrator | 2025-05-30 00:58:04 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:58:07.184368 | orchestrator | 2025-05-30 00:58:07 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:58:07.185620 | orchestrator | 2025-05-30 00:58:07 | INFO  | Task 689b1e7a-ebcc-4efa-9c5f-2d9a1b22460c is in state STARTED 2025-05-30 00:58:07.187516 | orchestrator | 2025-05-30 00:58:07 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:58:07.187591 | orchestrator | 2025-05-30 00:58:07 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:58:10.241717 | orchestrator | 2025-05-30 00:58:10 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state 
STARTED 2025-05-30 00:58:10.244791 | orchestrator | 2025-05-30 00:58:10 | INFO  | Task 689b1e7a-ebcc-4efa-9c5f-2d9a1b22460c is in state STARTED 2025-05-30 00:58:10.248501 | orchestrator | 2025-05-30 00:58:10 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:58:10.249053 | orchestrator | 2025-05-30 00:58:10 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:58:13.291301 | orchestrator | 2025-05-30 00:58:13 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:58:13.292437 | orchestrator | 2025-05-30 00:58:13 | INFO  | Task 689b1e7a-ebcc-4efa-9c5f-2d9a1b22460c is in state STARTED 2025-05-30 00:58:13.294200 | orchestrator | 2025-05-30 00:58:13 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:58:13.294245 | orchestrator | 2025-05-30 00:58:13 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:58:16.350126 | orchestrator | 2025-05-30 00:58:16 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:58:16.351531 | orchestrator | 2025-05-30 00:58:16 | INFO  | Task 689b1e7a-ebcc-4efa-9c5f-2d9a1b22460c is in state STARTED 2025-05-30 00:58:16.352791 | orchestrator | 2025-05-30 00:58:16 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state STARTED 2025-05-30 00:58:16.353135 | orchestrator | 2025-05-30 00:58:16 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:58:19.402865 | orchestrator | 2025-05-30 00:58:19 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:58:19.404117 | orchestrator | 2025-05-30 00:58:19 | INFO  | Task 689b1e7a-ebcc-4efa-9c5f-2d9a1b22460c is in state STARTED 2025-05-30 00:58:19.415516 | orchestrator | 2025-05-30 00:58:19.415591 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-30 00:58:19.415605 | orchestrator | 2025-05-30 00:58:19.415617 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-05-30 00:58:19.415628 | orchestrator | 2025-05-30 00:58:19.415639 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-05-30 00:58:19.415651 | orchestrator | Friday 30 May 2025 00:45:28 +0000 (0:00:01.423) 0:00:01.423 ************ 2025-05-30 00:58:19.415662 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:58:19.415675 | orchestrator | 2025-05-30 00:58:19.415685 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-30 00:58:19.415696 | orchestrator | Friday 30 May 2025 00:45:29 +0000 (0:00:01.152) 0:00:02.575 ************ 2025-05-30 00:58:19.415707 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-30 00:58:19.415719 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-05-30 00:58:19.415730 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-05-30 00:58:19.415740 | orchestrator | 2025-05-30 00:58:19.415751 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-05-30 00:58:19.415762 | orchestrator | Friday 30 May 2025 00:45:30 +0000 (0:00:00.573) 0:00:03.149 ************ 2025-05-30 00:58:19.415774 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:58:19.415786 | orchestrator | 2025-05-30 00:58:19.415797 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-05-30 00:58:19.415807 | orchestrator | Friday 30 May 2025 00:45:30 +0000 (0:00:00.969) 0:00:04.119 ************ 2025-05-30 00:58:19.415818 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.415829 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.415840 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.415850 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.415891 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.415938 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.415996 | orchestrator | 2025-05-30 00:58:19.416021 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-30 00:58:19.416032 | orchestrator | Friday 30 May 2025 00:45:32 +0000 (0:00:01.213) 0:00:05.332 ************ 2025-05-30 00:58:19.416043 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.416148 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.416182 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.416195 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.416207 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.416219 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.416231 | orchestrator | 2025-05-30 00:58:19.416243 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-30 00:58:19.416256 | orchestrator | Friday 30 May 2025 00:45:32 +0000 (0:00:00.754) 0:00:06.087 ************ 2025-05-30 00:58:19.416268 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.416280 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.416291 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.416326 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.416344 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.416364 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.416385 | orchestrator | 2025-05-30 00:58:19.416405 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-30 00:58:19.416424 | orchestrator | Friday 30 May 2025 00:45:34 +0000 (0:00:01.127) 0:00:07.214 ************ 2025-05-30 00:58:19.416435 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.416446 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.416457 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.416467 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.416478 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.416488 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.416499 | orchestrator | 2025-05-30 00:58:19.416510 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-30 00:58:19.416520 | orchestrator | Friday 30 May 2025 00:45:35 +0000 (0:00:00.995) 0:00:08.210 ************ 2025-05-30 00:58:19.416531 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.416541 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.416552 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.416562 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.416573 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.416583 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.416597 | orchestrator | 2025-05-30 00:58:19.416614 | orchestrator | TASK [ceph-facts : set_fact 
discovered_interpreter_python] ********************* 2025-05-30 00:58:19.416632 | orchestrator | Friday 30 May 2025 00:45:35 +0000 (0:00:00.768) 0:00:08.979 ************ 2025-05-30 00:58:19.416649 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.416667 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.416684 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.416702 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.416715 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.416726 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.416737 | orchestrator | 2025-05-30 00:58:19.416748 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-05-30 00:58:19.416759 | orchestrator | Friday 30 May 2025 00:45:36 +0000 (0:00:00.930) 0:00:09.909 ************ 2025-05-30 00:58:19.416770 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.416781 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.416792 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.416803 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.416813 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.416824 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.416835 | orchestrator | 2025-05-30 00:58:19.416846 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-05-30 00:58:19.416856 | orchestrator | Friday 30 May 2025 00:45:37 +0000 (0:00:00.598) 0:00:10.507 ************ 2025-05-30 00:58:19.416867 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.416878 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.416889 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.416899 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.416949 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.416962 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.416973 | orchestrator | 2025-05-30 00:58:19.416999 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-30 00:58:19.417019 | orchestrator | Friday 30 May 2025 00:45:38 +0000 (0:00:01.105) 0:00:11.613 ************ 2025-05-30 00:58:19.417030 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-30 00:58:19.417041 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-30 00:58:19.417052 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-30 00:58:19.417062 | orchestrator | 2025-05-30 00:58:19.417073 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-05-30 00:58:19.417084 | orchestrator | Friday 30 May 2025 00:45:39 +0000 (0:00:00.719) 0:00:12.332 ************ 2025-05-30 00:58:19.417106 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.417117 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.417128 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.417138 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.417149 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.417159 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.417170 | orchestrator | 2025-05-30 00:58:19.417181 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-05-30 00:58:19.417192 | orchestrator | Friday 30 May 2025 00:45:40 +0000 (0:00:01.710) 0:00:14.043 ************ 2025-05-30 
00:58:19.417202 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-30 00:58:19.417213 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-30 00:58:19.417224 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-30 00:58:19.417235 | orchestrator | 2025-05-30 00:58:19.417245 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-30 00:58:19.417256 | orchestrator | Friday 30 May 2025 00:45:43 +0000 (0:00:02.781) 0:00:16.825 ************ 2025-05-30 00:58:19.417267 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-30 00:58:19.417277 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-30 00:58:19.417288 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-30 00:58:19.417299 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.417309 | orchestrator | 2025-05-30 00:58:19.417320 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-30 00:58:19.417331 | orchestrator | Friday 30 May 2025 00:45:44 +0000 (0:00:00.356) 0:00:17.181 ************ 2025-05-30 00:58:19.417344 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-30 00:58:19.417358 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-30 00:58:19.417370 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-30 00:58:19.417380 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.417391 | orchestrator | 2025-05-30 00:58:19.417402 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-30 00:58:19.417413 | orchestrator | Friday 30 May 2025 00:45:44 +0000 (0:00:00.753) 0:00:17.934 ************ 2025-05-30 00:58:19.417426 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-30 00:58:19.417441 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-30 00:58:19.417452 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-30 00:58:19.417471 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.417482 | orchestrator | 2025-05-30 00:58:19.417493 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-30 00:58:19.417510 | orchestrator | Friday 30 May 2025 00:45:45 +0000 (0:00:00.305) 0:00:18.240 ************ 2025-05-30 00:58:19.417530 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-30 00:45:41.717963', 'end': '2025-05-30 00:45:41.968681', 'delta': '0:00:00.250718', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-30 00:58:19.417544 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-30 00:45:42.477472', 'end': '2025-05-30 00:45:42.733055', 'delta': '0:00:00.255583', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-30 00:58:19.417556 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-30 00:45:43.295435', 'end': '2025-05-30 00:45:43.572795', 'delta': '0:00:00.277360', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-30 00:58:19.417568 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.417579 | orchestrator | 2025-05-30 00:58:19.417590 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-30 00:58:19.417600 | orchestrator | Friday 30 May 2025 00:45:45 +0000 (0:00:00.182) 0:00:18.422 ************ 2025-05-30 00:58:19.417611 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.417622 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.417633 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.417643 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.417654 | orchestrator | ok: [testbed-node-4] 2025-05-30 
00:58:19.417664 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.417675 | orchestrator | 2025-05-30 00:58:19.417686 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-30 00:58:19.417697 | orchestrator | Friday 30 May 2025 00:45:46 +0000 (0:00:01.102) 0:00:19.525 ************ 2025-05-30 00:58:19.417765 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.417779 | orchestrator | 2025-05-30 00:58:19.417790 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-30 00:58:19.417801 | orchestrator | Friday 30 May 2025 00:45:47 +0000 (0:00:00.685) 0:00:20.210 ************ 2025-05-30 00:58:19.417812 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.417823 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.417841 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.417852 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.417863 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.417874 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.417884 | orchestrator | 2025-05-30 00:58:19.417895 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-30 00:58:19.417906 | orchestrator | Friday 30 May 2025 00:45:47 +0000 (0:00:00.801) 0:00:21.012 ************ 2025-05-30 00:58:19.417956 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.417968 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.417979 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.417989 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.418000 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.418011 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.418087 | orchestrator | 2025-05-30 00:58:19.418108 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-30 00:58:19.418129 | orchestrator | Friday 30 May 2025 00:45:49 +0000 (0:00:01.666) 0:00:22.679 ************ 2025-05-30 00:58:19.418148 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.418161 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.418172 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.418183 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.418193 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.418204 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.418215 | orchestrator | 2025-05-30 00:58:19.418225 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-30 00:58:19.418236 | orchestrator | Friday 30 May 2025 00:45:50 +0000 (0:00:00.717) 0:00:23.396 ************ 2025-05-30 00:58:19.418255 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.418266 | orchestrator | 2025-05-30 00:58:19.418284 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-30 00:58:19.418295 | orchestrator | Friday 30 May 2025 00:45:50 +0000 (0:00:00.165) 0:00:23.562 ************ 2025-05-30 00:58:19.418305 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.418316 | orchestrator | 2025-05-30 00:58:19.418327 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-30 00:58:19.418338 | orchestrator | Friday 30 May 2025 00:45:51 +0000 (0:00:00.660) 0:00:24.223 ************ 2025-05-30 
00:58:19.418349 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.418360 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.418371 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.418381 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.418392 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.418402 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.418413 | orchestrator | 2025-05-30 00:58:19.418424 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-30 00:58:19.418435 | orchestrator | Friday 30 May 2025 00:45:52 +0000 (0:00:00.945) 0:00:25.168 ************ 2025-05-30 00:58:19.418445 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.418456 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.418467 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.418524 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.418536 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.418546 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.418557 | orchestrator | 2025-05-30 00:58:19.418568 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-30 00:58:19.418579 | orchestrator | Friday 30 May 2025 00:45:53 +0000 (0:00:01.269) 0:00:26.437 ************ 2025-05-30 00:58:19.418590 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.418601 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.418611 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.418622 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.418633 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.418653 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.418665 | orchestrator | 2025-05-30 00:58:19.418676 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-30 00:58:19.418687 | orchestrator | Friday 30 May 2025 00:45:54 +0000 (0:00:00.917) 0:00:27.355 ************ 2025-05-30 00:58:19.418697 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.418708 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.418719 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.418730 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.418741 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.418751 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.418762 | orchestrator | 2025-05-30 00:58:19.418773 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-30 00:58:19.418784 | orchestrator | Friday 30 May 2025 00:45:55 +0000 (0:00:01.176) 0:00:28.532 ************ 2025-05-30 00:58:19.418795 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.418805 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.418816 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.418827 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.418838 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.418849 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.418859 | orchestrator | 2025-05-30 00:58:19.418870 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-30 00:58:19.418881 | orchestrator | Friday 30 May 2025 00:45:56 +0000 (0:00:00.675) 0:00:29.207 ************ 
2025-05-30 00:58:19.418892 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.418903 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.418938 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.418951 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.418961 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.418972 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.418983 | orchestrator | 2025-05-30 00:58:19.419049 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-30 00:58:19.419061 | orchestrator | Friday 30 May 2025 00:45:56 +0000 (0:00:00.872) 0:00:30.079 ************ 2025-05-30 00:58:19.419072 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.419082 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.419152 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.419164 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.419174 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.419185 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.419195 | orchestrator | 2025-05-30 00:58:19.419206 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-30 00:58:19.419217 | orchestrator | Friday 30 May 2025 00:45:57 +0000 (0:00:00.613) 0:00:30.692 ************ 2025-05-30 00:58:19.419229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.419240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.419266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.419286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.419297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.419309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.419320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.419331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.419342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.419373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_edc9b60b-d3ff-41c2-8d12-039335a3b5c5', 'scsi-SQEMU_QEMU_HARDDISK_edc9b60b-d3ff-41c2-8d12-039335a3b5c5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_edc9b60b-d3ff-41c2-8d12-039335a3b5c5-part1', 'scsi-SQEMU_QEMU_HARDDISK_edc9b60b-d3ff-41c2-8d12-039335a3b5c5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_edc9b60b-d3ff-41c2-8d12-039335a3b5c5-part14', 'scsi-SQEMU_QEMU_HARDDISK_edc9b60b-d3ff-41c2-8d12-039335a3b5c5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_edc9b60b-d3ff-41c2-8d12-039335a3b5c5-part15', 'scsi-SQEMU_QEMU_HARDDISK_edc9b60b-d3ff-41c2-8d12-039335a3b5c5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_edc9b60b-d3ff-41c2-8d12-039335a3b5c5-part16', 'scsi-SQEMU_QEMU_HARDDISK_edc9b60b-d3ff-41c2-8d12-039335a3b5c5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 00:58:19.419395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.419408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-30-00-02-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 00:58:19.419421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.419432 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.419443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.419454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.419465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.419500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.419581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_770f164d-60f5-482d-a3bc-9c475531a1a8', 'scsi-SQEMU_QEMU_HARDDISK_770f164d-60f5-482d-a3bc-9c475531a1a8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_770f164d-60f5-482d-a3bc-9c475531a1a8-part1', 'scsi-SQEMU_QEMU_HARDDISK_770f164d-60f5-482d-a3bc-9c475531a1a8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_770f164d-60f5-482d-a3bc-9c475531a1a8-part14', 'scsi-SQEMU_QEMU_HARDDISK_770f164d-60f5-482d-a3bc-9c475531a1a8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_770f164d-60f5-482d-a3bc-9c475531a1a8-part15', 'scsi-SQEMU_QEMU_HARDDISK_770f164d-60f5-482d-a3bc-9c475531a1a8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_770f164d-60f5-482d-a3bc-9c475531a1a8-part16', 'scsi-SQEMU_QEMU_HARDDISK_770f164d-60f5-482d-a3bc-9c475531a1a8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 00:58:19.419597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-30-00-02-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 00:58:19.419608 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.419620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.419631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-05-30 00:58:19.419662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.419674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.419685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.419697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.419708 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.419719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.419730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.419754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43763239-3247-473a-87fc-14ea183bb8af', 'scsi-SQEMU_QEMU_HARDDISK_43763239-3247-473a-87fc-14ea183bb8af'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43763239-3247-473a-87fc-14ea183bb8af-part1', 'scsi-SQEMU_QEMU_HARDDISK_43763239-3247-473a-87fc-14ea183bb8af-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43763239-3247-473a-87fc-14ea183bb8af-part14', 'scsi-SQEMU_QEMU_HARDDISK_43763239-3247-473a-87fc-14ea183bb8af-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43763239-3247-473a-87fc-14ea183bb8af-part15', 'scsi-SQEMU_QEMU_HARDDISK_43763239-3247-473a-87fc-14ea183bb8af-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43763239-3247-473a-87fc-14ea183bb8af-part16', 'scsi-SQEMU_QEMU_HARDDISK_43763239-3247-473a-87fc-14ea183bb8af-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 00:58:19 | INFO  | Task 3e3bb1ef-f820-458f-9d16-87e9a792aba0 is in state SUCCESS 2025-05-30 00:58:19.419791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-30-00-02-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 00:58:19.419810 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6d0cb66e--f8af--5d02--a2d6--05303feeced3-osd--block--6d0cb66e--f8af--5d02--a2d6--05303feeced3', 'dm-uuid-LVM-6gGdc0okQLoucjNi2S2OddqQDlbW0RvHPpk2V3WdjgwQEu9HjhnqN54cgy7JnKBh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.419831 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--f43ff32d--4fc4--5ece--8353--26072ce1c913-osd--block--f43ff32d--4fc4--5ece--8353--26072ce1c913', 'dm-uuid-LVM-oVcraZljfeh9epu3EEifpwHyixceNwoqa9zJlkL2TFSDWxfMwlRwPHl0tRcLM9oW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.419849 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.419868 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.419887 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.419906 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.419966 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.419993 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.420005 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.420017 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.420028 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.420039 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--50b3064c--7478--543e--8abf--661fdbdc95ce-osd--block--50b3064c--7478--543e--8abf--661fdbdc95ce', 'dm-uuid-LVM-clYUmWVvX7ZWgFP0x00l3EywtfJzxZ3QH6v7nuu2S4cO7xXwGwzjv0kUJx1PnkCS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.420051 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--749c70bc--bf8f--56a3--a425--711d4530659c-osd--block--749c70bc--bf8f--56a3--a425--711d4530659c', 'dm-uuid-LVM-9tcTyJjgk0ux8ZxJM3Z0I5BG0kFkt0svje0LYMMVEH8PRz1Nvle2Fu6f0rm3wc0t'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.420077 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a6319b3-0c44-4d2f-bfc1-43899b1e392d', 'scsi-SQEMU_QEMU_HARDDISK_9a6319b3-0c44-4d2f-bfc1-43899b1e392d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a6319b3-0c44-4d2f-bfc1-43899b1e392d-part1', 'scsi-SQEMU_QEMU_HARDDISK_9a6319b3-0c44-4d2f-bfc1-43899b1e392d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a6319b3-0c44-4d2f-bfc1-43899b1e392d-part14', 'scsi-SQEMU_QEMU_HARDDISK_9a6319b3-0c44-4d2f-bfc1-43899b1e392d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a6319b3-0c44-4d2f-bfc1-43899b1e392d-part15', 'scsi-SQEMU_QEMU_HARDDISK_9a6319b3-0c44-4d2f-bfc1-43899b1e392d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a6319b3-0c44-4d2f-bfc1-43899b1e392d-part16', 'scsi-SQEMU_QEMU_HARDDISK_9a6319b3-0c44-4d2f-bfc1-43899b1e392d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 00:58:19.420096 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.420108 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6d0cb66e--f8af--5d02--a2d6--05303feeced3-osd--block--6d0cb66e--f8af--5d02--a2d6--05303feeced3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Wwhvba-QkQ4-dO70-O1Zv-8C8U-YLVi-cGVH2f', 'scsi-0QEMU_QEMU_HARDDISK_5232ed07-4d85-4988-9bc7-7d761a8f0a42', 'scsi-SQEMU_QEMU_HARDDISK_5232ed07-4d85-4988-9bc7-7d761a8f0a42'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 00:58:19.420121 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.420132 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.420149 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f43ff32d--4fc4--5ece--8353--26072ce1c913-osd--block--f43ff32d--4fc4--5ece--8353--26072ce1c913'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-pxjZBb-BbHY-jLWM-Qc7v-fVZn-mI2Q-zDPIaJ', 'scsi-0QEMU_QEMU_HARDDISK_d57cbd6a-67f1-4040-83cf-671f4c3c6a1f', 'scsi-SQEMU_QEMU_HARDDISK_d57cbd6a-67f1-4040-83cf-671f4c3c6a1f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 00:58:19.420161 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.420184 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76f37bde-13ed-44ba-8084-a2417c9798d9', 'scsi-SQEMU_QEMU_HARDDISK_76f37bde-13ed-44ba-8084-a2417c9798d9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 00:58:19.420197 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.420209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-30-00-02-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 00:58:19.420220 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.420231 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.420242 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.420309 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62bf4b98-4a21-4975-9c67-1ea56f697b51', 'scsi-SQEMU_QEMU_HARDDISK_62bf4b98-4a21-4975-9c67-1ea56f697b51'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62bf4b98-4a21-4975-9c67-1ea56f697b51-part1', 'scsi-SQEMU_QEMU_HARDDISK_62bf4b98-4a21-4975-9c67-1ea56f697b51-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62bf4b98-4a21-4975-9c67-1ea56f697b51-part14', 'scsi-SQEMU_QEMU_HARDDISK_62bf4b98-4a21-4975-9c67-1ea56f697b51-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62bf4b98-4a21-4975-9c67-1ea56f697b51-part15', 'scsi-SQEMU_QEMU_HARDDISK_62bf4b98-4a21-4975-9c67-1ea56f697b51-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62bf4b98-4a21-4975-9c67-1ea56f697b51-part16', 'scsi-SQEMU_QEMU_HARDDISK_62bf4b98-4a21-4975-9c67-1ea56f697b51-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 00:58:19.420324 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.420336 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--50b3064c--7478--543e--8abf--661fdbdc95ce-osd--block--50b3064c--7478--543e--8abf--661fdbdc95ce'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dMDbY3-Hov1-9Nml-qChJ-wrEc-m9dI-tyWzXY', 'scsi-0QEMU_QEMU_HARDDISK_173bbd31-d008-4662-8aea-7cfb1ab21884', 'scsi-SQEMU_QEMU_HARDDISK_173bbd31-d008-4662-8aea-7cfb1ab21884'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 00:58:19.420348 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--749c70bc--bf8f--56a3--a425--711d4530659c-osd--block--749c70bc--bf8f--56a3--a425--711d4530659c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XxD8WH-u833-sOKg-41RQ-ZRE2-o4Sl-UJcJBx', 'scsi-0QEMU_QEMU_HARDDISK_fd28e93c-f7f0-4d71-9af0-3817aadd609f', 'scsi-SQEMU_QEMU_HARDDISK_fd28e93c-f7f0-4d71-9af0-3817aadd609f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 00:58:19.420367 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fcd55a48-2b4a-45aa-bb97-767fc341b1ef', 'scsi-SQEMU_QEMU_HARDDISK_fcd55a48-2b4a-45aa-bb97-767fc341b1ef'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 00:58:19.420378 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-30-00-02-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 00:58:19.420401 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2ff0e7ee--f669--5460--a216--2d1fc13a4a65-osd--block--2ff0e7ee--f669--5460--a216--2d1fc13a4a65', 'dm-uuid-LVM-IKly217p7QCeAB0hTFdCpSZ2HK08iqU6GSmsMOKjgZrcxn43YbP1UbpiR3ETkvpb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.420414 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.420425 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dfef1ad9--1307--56b8--9770--fa52c7fc01ce-osd--block--dfef1ad9--1307--56b8--9770--fa52c7fc01ce', 'dm-uuid-LVM-KgH2KkzxMOT7QUU348SQZWeoBKbjLTJfQYHob8FVgbG4NbFW7rda7XOde2NimkI9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.420436 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.420448 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.420543 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.420654 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.420679 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.420728 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.420742 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.420762 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 00:58:19.420781 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c29df819-5e55-4aea-aecd-e9fcfd91068f', 'scsi-SQEMU_QEMU_HARDDISK_c29df819-5e55-4aea-aecd-e9fcfd91068f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c29df819-5e55-4aea-aecd-e9fcfd91068f-part1', 'scsi-SQEMU_QEMU_HARDDISK_c29df819-5e55-4aea-aecd-e9fcfd91068f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c29df819-5e55-4aea-aecd-e9fcfd91068f-part14', 'scsi-SQEMU_QEMU_HARDDISK_c29df819-5e55-4aea-aecd-e9fcfd91068f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c29df819-5e55-4aea-aecd-e9fcfd91068f-part15', 'scsi-SQEMU_QEMU_HARDDISK_c29df819-5e55-4aea-aecd-e9fcfd91068f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c29df819-5e55-4aea-aecd-e9fcfd91068f-part16', 'scsi-SQEMU_QEMU_HARDDISK_c29df819-5e55-4aea-aecd-e9fcfd91068f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 00:58:19.420802 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2ff0e7ee--f669--5460--a216--2d1fc13a4a65-osd--block--2ff0e7ee--f669--5460--a216--2d1fc13a4a65'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4Pxju3-iWEI-HwrN-dRCG-LMHZ-deMX-U8lGuZ', 'scsi-0QEMU_QEMU_HARDDISK_2529d57e-ffb4-494c-a22f-a2bb1703f8b2', 'scsi-SQEMU_QEMU_HARDDISK_2529d57e-ffb4-494c-a22f-a2bb1703f8b2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 00:58:19.420814 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--dfef1ad9--1307--56b8--9770--fa52c7fc01ce-osd--block--dfef1ad9--1307--56b8--9770--fa52c7fc01ce'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RJOnck-dGqo-ezFU-Y50Z-T98W-7F3K-0LWt4p', 'scsi-0QEMU_QEMU_HARDDISK_c7216231-2c47-48eb-b4a1-b98b10008028', 'scsi-SQEMU_QEMU_HARDDISK_c7216231-2c47-48eb-b4a1-b98b10008028'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 00:58:19.420837 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8d1e0c18-9aac-4f03-b30e-87512c271b47', 'scsi-SQEMU_QEMU_HARDDISK_8d1e0c18-9aac-4f03-b30e-87512c271b47'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 00:58:19.420849 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-30-00-02-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 00:58:19.420861 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.420872 | orchestrator | 2025-05-30 00:58:19.420883 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-30 00:58:19.420894 | orchestrator | Friday 30 May 2025 00:45:58 +0000 (0:00:01.429) 0:00:32.122 ************ 2025-05-30 00:58:19.420905 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.421034 | orchestrator | 2025-05-30 00:58:19.421058 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-30 00:58:19.421077 | orchestrator | Friday 30 May 2025 00:45:59 +0000 (0:00:00.243) 0:00:32.365 ************ 2025-05-30 00:58:19.421136 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.421153 | orchestrator | 2025-05-30 00:58:19.421186 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-05-30 00:58:19.421197 | orchestrator | Friday 30 May 2025 00:45:59 +0000 (0:00:00.167) 0:00:32.533 ************ 2025-05-30 00:58:19.421207 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.421243 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.421264 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.421274 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.421283 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.421293 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.421303 | orchestrator | 2025-05-30 00:58:19.421313 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-30 00:58:19.421322 | orchestrator | Friday 30 May 2025 00:46:00 +0000 
(0:00:00.724) 0:00:33.257 ************ 2025-05-30 00:58:19.421332 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.421342 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.421352 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.421362 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.421371 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.421381 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.421390 | orchestrator | 2025-05-30 00:58:19.421400 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-30 00:58:19.421409 | orchestrator | Friday 30 May 2025 00:46:01 +0000 (0:00:01.284) 0:00:34.542 ************ 2025-05-30 00:58:19.421419 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.421428 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.421438 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.421447 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.421456 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.421466 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.421475 | orchestrator | 2025-05-30 00:58:19.421484 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-30 00:58:19.421494 | orchestrator | Friday 30 May 2025 00:46:02 +0000 (0:00:00.681) 0:00:35.223 ************ 2025-05-30 00:58:19.421529 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.421540 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.421550 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.421559 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.421569 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.421579 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.421588 | orchestrator | 2025-05-30 00:58:19.421598 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-30 00:58:19.421608 | orchestrator | Friday 30 May 2025 00:46:03 +0000 (0:00:01.094) 0:00:36.318 ************ 2025-05-30 00:58:19.421617 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.421627 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.421636 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.421646 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.421655 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.421665 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.421696 | orchestrator | 2025-05-30 00:58:19.421729 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-30 00:58:19.421740 | orchestrator | Friday 30 May 2025 00:46:03 +0000 (0:00:00.803) 0:00:37.122 ************ 2025-05-30 00:58:19.421750 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.421786 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.421797 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.421806 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.421816 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.421825 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.421835 | orchestrator | 2025-05-30 00:58:19.421844 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-30 00:58:19.421854 | orchestrator | Friday 30 May 2025 00:46:05 +0000 (0:00:01.185) 0:00:38.307 ************ 2025-05-30 00:58:19.421864 
| orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.421873 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.421883 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.421892 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.421902 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.421973 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.422003 | orchestrator | 2025-05-30 00:58:19.422060 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-30 00:58:19.422082 | orchestrator | Friday 30 May 2025 00:46:06 +0000 (0:00:00.860) 0:00:39.168 ************ 2025-05-30 00:58:19.422093 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-30 00:58:19.422109 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-30 00:58:19.422119 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-30 00:58:19.422128 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-30 00:58:19.422138 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-30 00:58:19.422147 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-30 00:58:19.422157 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-30 00:58:19.422166 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-30 00:58:19.422175 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-30 00:58:19.422185 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.422194 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.422204 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-30 00:58:19.422213 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.422222 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-30 00:58:19.422232 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-30 00:58:19.422241 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-30 00:58:19.422250 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-30 00:58:19.422260 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-30 00:58:19.422269 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.422279 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-30 00:58:19.422288 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.422298 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-30 00:58:19.422307 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-30 00:58:19.422316 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.422326 | orchestrator | 2025-05-30 00:58:19.422336 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-30 00:58:19.422345 | orchestrator | Friday 30 May 2025 00:46:08 +0000 (0:00:02.540) 0:00:41.709 ************ 2025-05-30 00:58:19.422355 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-30 00:58:19.422364 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-30 00:58:19.422374 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-30 00:58:19.422384 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-30 
00:58:19.422393 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-30 00:58:19.422403 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-30 00:58:19.422412 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-30 00:58:19.422422 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-30 00:58:19.422431 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.422441 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.422450 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-30 00:58:19.422460 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-30 00:58:19.422469 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-30 00:58:19.422478 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-30 00:58:19.422487 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.422495 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-30 00:58:19.422503 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-30 00:58:19.422516 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-30 00:58:19.422524 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.422531 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-30 00:58:19.422539 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-30 00:58:19.422547 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.422555 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-30 00:58:19.422562 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.422570 | orchestrator | 2025-05-30 00:58:19.422578 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-05-30 00:58:19.422586 | orchestrator | Friday 30 May 2025 00:46:10 +0000 (0:00:01.893) 0:00:43.602 ************ 2025-05-30 00:58:19.422594 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-05-30 00:58:19.422602 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-30 00:58:19.422609 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-05-30 00:58:19.422617 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-05-30 00:58:19.422625 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-05-30 00:58:19.422633 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-05-30 00:58:19.422640 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-30 00:58:19.422648 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-05-30 00:58:19.422656 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-05-30 00:58:19.422664 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-05-30 00:58:19.422671 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-30 00:58:19.422679 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-05-30 00:58:19.422687 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-05-30 00:58:19.422694 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-05-30 00:58:19.422702 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-05-30 00:58:19.422710 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-05-30 00:58:19.422718 | orchestrator | ok: 
[testbed-node-4] => (item=testbed-node-2) 2025-05-30 00:58:19.422730 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-05-30 00:58:19.422738 | orchestrator | 2025-05-30 00:58:19.422750 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-05-30 00:58:19.422758 | orchestrator | Friday 30 May 2025 00:46:15 +0000 (0:00:05.463) 0:00:49.066 ************ 2025-05-30 00:58:19.422765 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-30 00:58:19.422773 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-30 00:58:19.422781 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-30 00:58:19.422804 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-30 00:58:19.422812 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-30 00:58:19.422820 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-30 00:58:19.422828 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.422835 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-30 00:58:19.422843 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-30 00:58:19.422851 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-30 00:58:19.422859 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-30 00:58:19.422867 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-30 00:58:19.422874 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-30 00:58:19.422882 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.422890 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-30 00:58:19.422898 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-30 00:58:19.422905 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-30 00:58:19.422937 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.422946 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.422985 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.422993 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-30 00:58:19.423001 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-30 00:58:19.423009 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-30 00:58:19.423017 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.423025 | orchestrator | 2025-05-30 00:58:19.423033 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-05-30 00:58:19.423041 | orchestrator | Friday 30 May 2025 00:46:17 +0000 (0:00:01.087) 0:00:50.153 ************ 2025-05-30 00:58:19.423049 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-30 00:58:19.423057 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-30 00:58:19.423065 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-30 00:58:19.423073 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-30 00:58:19.423085 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-30 00:58:19.423098 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-30 00:58:19.423109 | orchestrator | skipping: [testbed-node-0] 2025-05-30 
00:58:19.423120 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-30 00:58:19.423133 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-30 00:58:19.423146 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-30 00:58:19.423157 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.423165 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-30 00:58:19.423173 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-30 00:58:19.423181 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-30 00:58:19.423188 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.423196 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-30 00:58:19.423204 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-30 00:58:19.423212 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-30 00:58:19.423220 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.423227 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.423235 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-30 00:58:19.423243 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-30 00:58:19.423251 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-30 00:58:19.423258 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.423266 | orchestrator | 2025-05-30 00:58:19.423274 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-30 00:58:19.423282 | orchestrator | Friday 30 May 2025 00:46:18 +0000 (0:00:01.032) 0:00:51.186 ************ 2025-05-30 00:58:19.423290 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-05-30 00:58:19.423297 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-30 00:58:19.423306 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-30 00:58:19.423314 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-30 00:58:19.423321 | orchestrator | ok: [testbed-node-1] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'}) 2025-05-30 00:58:19.423329 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-30 00:58:19.423337 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-30 00:58:19.423352 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-30 00:58:19.423375 | orchestrator | ok: [testbed-node-2] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'}) 2025-05-30 00:58:19.423453 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-30 00:58:19.423462 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-30 00:58:19.423470 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-30 00:58:19.423478 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-30 00:58:19.423486 | 
orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-30 00:58:19.423493 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-30 00:58:19.423501 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.423509 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.423517 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-30 00:58:19.423525 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-30 00:58:19.423532 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-30 00:58:19.423540 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.423548 | orchestrator | 2025-05-30 00:58:19.423555 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-30 00:58:19.423563 | orchestrator | Friday 30 May 2025 00:46:19 +0000 (0:00:01.285) 0:00:52.472 ************ 2025-05-30 00:58:19.423571 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.423579 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.423587 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.423594 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:58:19.423602 | orchestrator | 2025-05-30 00:58:19.423610 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-30 00:58:19.423618 | orchestrator | Friday 30 May 2025 00:46:20 +0000 (0:00:01.206) 0:00:53.678 ************ 2025-05-30 00:58:19.423626 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.423634 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.423642 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.423649 | orchestrator | 2025-05-30 00:58:19.423657 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-30 00:58:19.423665 | orchestrator | Friday 30 May 2025 00:46:21 +0000 (0:00:00.526) 0:00:54.204 ************ 2025-05-30 00:58:19.423673 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.423680 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.423688 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.423696 | orchestrator | 2025-05-30 00:58:19.423704 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-30 00:58:19.423711 | orchestrator | Friday 30 May 2025 00:46:21 +0000 (0:00:00.708) 0:00:54.913 ************ 2025-05-30 00:58:19.423719 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.423727 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.423735 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.423742 | orchestrator | 2025-05-30 00:58:19.423750 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-30 00:58:19.423758 | orchestrator | Friday 30 May 2025 00:46:22 +0000 (0:00:00.649) 0:00:55.562 ************ 2025-05-30 00:58:19.423766 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.423774 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.423781 | orchestrator | ok: [testbed-node-5] 
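The ceph-facts tasks above follow a simple selection pattern: each monitor picks its own entry out of a per-host name/addr list (hence one "ok" and two "skipping" items per node), and the rgw hosts (testbed-node-3/4/5) fall back from radosgw_address_block to an explicit radosgw_address to radosgw_interface. A minimal sketch of that pattern in plain Ansible follows; the variable names _mon_addresses, _current_mon_addr and _rgw_addr and the simplified when-guards are illustrative assumptions, not the actual ceph-ansible role internals:

    # minimal sketch, assuming _mon_addresses looks like
    # [{'name': 'testbed-node-0', 'addr': '192.168.16.10'}, ...]
    - name: pick this host's monitor address out of the per-host list
      ansible.builtin.set_fact:
        _current_mon_addr: "{{ item.addr }}"
      loop: "{{ _mon_addresses }}"
      when: item.name == inventory_hostname   # every non-matching item shows up as "skipping" in the log

    - name: use the explicitly configured radosgw_address when no CIDR block or interface is set
      ansible.builtin.set_fact:
        _rgw_addr: "{{ radosgw_address }}"
      when:
        - radosgw_address_block is not defined
        - radosgw_interface is not defined

In this run the explicit-address branch is the one taken, which is why the radosgw_address_block and radosgw_interface variants report "skipping" on testbed-node-3/4/5 while "set_fact _radosgw_address to radosgw_address" reports "ok".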
2025-05-30 00:58:19.423789 | orchestrator | 2025-05-30 00:58:19.423803 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-30 00:58:19.423810 | orchestrator | Friday 30 May 2025 00:46:23 +0000 (0:00:00.942) 0:00:56.504 ************ 2025-05-30 00:58:19.423818 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-30 00:58:19.423826 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-30 00:58:19.423834 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-30 00:58:19.423841 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.423849 | orchestrator | 2025-05-30 00:58:19.423857 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-30 00:58:19.423864 | orchestrator | Friday 30 May 2025 00:46:24 +0000 (0:00:00.696) 0:00:57.200 ************ 2025-05-30 00:58:19.423872 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-30 00:58:19.423880 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-30 00:58:19.423887 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-30 00:58:19.423895 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.423903 | orchestrator | 2025-05-30 00:58:19.423929 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-30 00:58:19.423943 | orchestrator | Friday 30 May 2025 00:46:24 +0000 (0:00:00.518) 0:00:57.719 ************ 2025-05-30 00:58:19.423951 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-30 00:58:19.423959 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-30 00:58:19.423966 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-30 00:58:19.423974 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.423982 | orchestrator | 2025-05-30 00:58:19.423990 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-30 00:58:19.423997 | orchestrator | Friday 30 May 2025 00:46:25 +0000 (0:00:01.028) 0:00:58.748 ************ 2025-05-30 00:58:19.424005 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.424013 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.424021 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.424028 | orchestrator | 2025-05-30 00:58:19.424041 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-30 00:58:19.424050 | orchestrator | Friday 30 May 2025 00:46:26 +0000 (0:00:00.608) 0:00:59.356 ************ 2025-05-30 00:58:19.424058 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-30 00:58:19.424066 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-30 00:58:19.424074 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-30 00:58:19.424082 | orchestrator | 2025-05-30 00:58:19.424089 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-30 00:58:19.424097 | orchestrator | Friday 30 May 2025 00:46:27 +0000 (0:00:01.494) 0:01:00.851 ************ 2025-05-30 00:58:19.424105 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.424172 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.424189 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.424197 | orchestrator | 2025-05-30 00:58:19.424205 | orchestrator | TASK [ceph-facts : reset 
rgw_instances (workaround)] *************************** 2025-05-30 00:58:19.424213 | orchestrator | Friday 30 May 2025 00:46:28 +0000 (0:00:00.701) 0:01:01.553 ************ 2025-05-30 00:58:19.424221 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.424228 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.424236 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.424244 | orchestrator | 2025-05-30 00:58:19.424252 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-30 00:58:19.424260 | orchestrator | Friday 30 May 2025 00:46:29 +0000 (0:00:00.653) 0:01:02.207 ************ 2025-05-30 00:58:19.424268 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-30 00:58:19.424276 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.424283 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-30 00:58:19.424291 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.424305 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-30 00:58:19.424313 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.424321 | orchestrator | 2025-05-30 00:58:19.424329 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-30 00:58:19.424337 | orchestrator | Friday 30 May 2025 00:46:30 +0000 (0:00:01.057) 0:01:03.264 ************ 2025-05-30 00:58:19.424345 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-30 00:58:19.424353 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.424361 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-30 00:58:19.424368 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.424376 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-30 00:58:19.424385 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.424392 | orchestrator | 2025-05-30 00:58:19.424400 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-30 00:58:19.424408 | orchestrator | Friday 30 May 2025 00:46:30 +0000 (0:00:00.769) 0:01:04.034 ************ 2025-05-30 00:58:19.424416 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-30 00:58:19.424424 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-30 00:58:19.424432 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-30 00:58:19.424440 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-30 00:58:19.424448 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.424455 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-30 00:58:19.424463 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-30 00:58:19.424471 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-30 00:58:19.424479 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.424487 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-30 00:58:19.424494 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-30 00:58:19.424502 | orchestrator | skipping: [testbed-node-5] 2025-05-30 
00:58:19.424510 | orchestrator | 2025-05-30 00:58:19.424518 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-05-30 00:58:19.424526 | orchestrator | Friday 30 May 2025 00:46:31 +0000 (0:00:00.592) 0:01:04.626 ************ 2025-05-30 00:58:19.424534 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.424542 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.424549 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.424557 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.424565 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.424573 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.424581 | orchestrator | 2025-05-30 00:58:19.424589 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-05-30 00:58:19.424597 | orchestrator | Friday 30 May 2025 00:46:32 +0000 (0:00:00.697) 0:01:05.324 ************ 2025-05-30 00:58:19.424605 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-30 00:58:19.424612 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-30 00:58:19.424620 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-30 00:58:19.424628 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-30 00:58:19.424636 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-30 00:58:19.424644 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-30 00:58:19.424652 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-30 00:58:19.424665 | orchestrator | 2025-05-30 00:58:19.424673 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-05-30 00:58:19.424686 | orchestrator | Friday 30 May 2025 00:46:33 +0000 (0:00:01.198) 0:01:06.522 ************ 2025-05-30 00:58:19.424699 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-30 00:58:19.424707 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-30 00:58:19.424715 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-30 00:58:19.424723 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-30 00:58:19.424731 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-30 00:58:19.424738 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-30 00:58:19.424746 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-30 00:58:19.424754 | orchestrator | 2025-05-30 00:58:19.424762 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-30 00:58:19.424769 | orchestrator | Friday 30 May 2025 00:46:35 +0000 (0:00:02.085) 0:01:08.607 ************ 2025-05-30 00:58:19.424778 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:58:19.424786 | orchestrator | 2025-05-30 00:58:19.424794 | orchestrator | TASK [ceph-handler : 
check for a mon container] ******************************** 2025-05-30 00:58:19.424802 | orchestrator | Friday 30 May 2025 00:46:36 +0000 (0:00:01.388) 0:01:09.996 ************ 2025-05-30 00:58:19.424810 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.424818 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.424825 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.424833 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.424841 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.424849 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.424856 | orchestrator | 2025-05-30 00:58:19.424864 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-30 00:58:19.424872 | orchestrator | Friday 30 May 2025 00:46:37 +0000 (0:00:01.113) 0:01:11.110 ************ 2025-05-30 00:58:19.424880 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.424888 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.424895 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.424903 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.424989 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.425022 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.425032 | orchestrator | 2025-05-30 00:58:19.425039 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-30 00:58:19.425047 | orchestrator | Friday 30 May 2025 00:46:39 +0000 (0:00:01.230) 0:01:12.341 ************ 2025-05-30 00:58:19.425055 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.425063 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.425071 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.425079 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.425087 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.425095 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.425102 | orchestrator | 2025-05-30 00:58:19.425110 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-30 00:58:19.425118 | orchestrator | Friday 30 May 2025 00:46:40 +0000 (0:00:01.517) 0:01:13.859 ************ 2025-05-30 00:58:19.425126 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.425134 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.425141 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.425149 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.425157 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.425165 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.425173 | orchestrator | 2025-05-30 00:58:19.425187 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-30 00:58:19.425195 | orchestrator | Friday 30 May 2025 00:46:41 +0000 (0:00:01.140) 0:01:15.000 ************ 2025-05-30 00:58:19.425202 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.425210 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.425218 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.425226 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.425234 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.425242 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.425250 | orchestrator | 2025-05-30 00:58:19.425258 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-30 
00:58:19.425265 | orchestrator | Friday 30 May 2025 00:46:43 +0000 (0:00:01.407) 0:01:16.408 ************ 2025-05-30 00:58:19.425273 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.425281 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.425289 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.425296 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.425304 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.425312 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.425320 | orchestrator | 2025-05-30 00:58:19.425328 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-30 00:58:19.425335 | orchestrator | Friday 30 May 2025 00:46:44 +0000 (0:00:00.866) 0:01:17.274 ************ 2025-05-30 00:58:19.425343 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.425351 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.425359 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.425366 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.425374 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.425382 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.425390 | orchestrator | 2025-05-30 00:58:19.425397 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-30 00:58:19.425405 | orchestrator | Friday 30 May 2025 00:46:45 +0000 (0:00:00.914) 0:01:18.189 ************ 2025-05-30 00:58:19.425413 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.425421 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.425428 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.425436 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.425444 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.425452 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.425459 | orchestrator | 2025-05-30 00:58:19.425473 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-30 00:58:19.425484 | orchestrator | Friday 30 May 2025 00:46:45 +0000 (0:00:00.740) 0:01:18.930 ************ 2025-05-30 00:58:19.425491 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.425497 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.425504 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.425510 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.425517 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.425524 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.425530 | orchestrator | 2025-05-30 00:58:19.425537 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-30 00:58:19.425543 | orchestrator | Friday 30 May 2025 00:46:46 +0000 (0:00:00.920) 0:01:19.851 ************ 2025-05-30 00:58:19.425550 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.425557 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.425563 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.425570 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.425576 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.425583 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.425589 | orchestrator | 2025-05-30 00:58:19.425596 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 
2025-05-30 00:58:19.425603 | orchestrator | Friday 30 May 2025 00:46:47 +0000 (0:00:00.597) 0:01:20.448 ************ 2025-05-30 00:58:19.425614 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.425621 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.425627 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.425634 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.425641 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.425647 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.425654 | orchestrator | 2025-05-30 00:58:19.425660 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-30 00:58:19.425667 | orchestrator | Friday 30 May 2025 00:46:48 +0000 (0:00:01.433) 0:01:21.882 ************ 2025-05-30 00:58:19.425674 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.425680 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.425687 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.425693 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.425700 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.425706 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.425713 | orchestrator | 2025-05-30 00:58:19.425720 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-30 00:58:19.425727 | orchestrator | Friday 30 May 2025 00:46:49 +0000 (0:00:00.622) 0:01:22.504 ************ 2025-05-30 00:58:19.425733 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.425740 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.425746 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.425753 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.425759 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.425766 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.425772 | orchestrator | 2025-05-30 00:58:19.425779 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-30 00:58:19.425786 | orchestrator | Friday 30 May 2025 00:46:50 +0000 (0:00:00.791) 0:01:23.296 ************ 2025-05-30 00:58:19.425792 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.425799 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.425805 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.425812 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.425818 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.425825 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.425832 | orchestrator | 2025-05-30 00:58:19.425838 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-30 00:58:19.425845 | orchestrator | Friday 30 May 2025 00:46:50 +0000 (0:00:00.727) 0:01:24.024 ************ 2025-05-30 00:58:19.425852 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.425858 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.425865 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.425871 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.425878 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.425884 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.425891 | orchestrator | 2025-05-30 00:58:19.425898 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-30 00:58:19.425904 | orchestrator | Friday 30 May 2025 00:46:52 +0000 
(0:00:01.212) 0:01:25.237 ************ 2025-05-30 00:58:19.425926 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.425934 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.425941 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.425947 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.425954 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.425960 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.425967 | orchestrator | 2025-05-30 00:58:19.425973 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-30 00:58:19.425980 | orchestrator | Friday 30 May 2025 00:46:52 +0000 (0:00:00.813) 0:01:26.050 ************ 2025-05-30 00:58:19.425986 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.425993 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.425999 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.426006 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.426385 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.426398 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.426405 | orchestrator | 2025-05-30 00:58:19.426412 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-30 00:58:19.426419 | orchestrator | Friday 30 May 2025 00:46:53 +0000 (0:00:00.850) 0:01:26.901 ************ 2025-05-30 00:58:19.426425 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.426432 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.426439 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.426445 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.426452 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.426458 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.426465 | orchestrator | 2025-05-30 00:58:19.426472 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-30 00:58:19.426478 | orchestrator | Friday 30 May 2025 00:46:54 +0000 (0:00:00.601) 0:01:27.503 ************ 2025-05-30 00:58:19.426485 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.426492 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.426498 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.426505 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.426512 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.426528 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.426535 | orchestrator | 2025-05-30 00:58:19.426548 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-30 00:58:19.426555 | orchestrator | Friday 30 May 2025 00:46:55 +0000 (0:00:00.832) 0:01:28.335 ************ 2025-05-30 00:58:19.426561 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.426568 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.426575 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.426581 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.426588 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.426594 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.426601 | orchestrator | 2025-05-30 00:58:19.426608 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-30 00:58:19.426614 | orchestrator | Friday 30 May 2025 00:46:55 +0000 (0:00:00.686) 0:01:29.022 ************ 2025-05-30 00:58:19.426621 
| orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.426627 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.426634 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.426641 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.426647 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.426654 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.426660 | orchestrator | 2025-05-30 00:58:19.426667 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-30 00:58:19.426674 | orchestrator | Friday 30 May 2025 00:46:56 +0000 (0:00:00.847) 0:01:29.869 ************ 2025-05-30 00:58:19.426680 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.426687 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.426693 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.426700 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.426706 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.426713 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.426720 | orchestrator | 2025-05-30 00:58:19.426726 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-30 00:58:19.426733 | orchestrator | Friday 30 May 2025 00:46:57 +0000 (0:00:00.744) 0:01:30.614 ************ 2025-05-30 00:58:19.426740 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.426746 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.426753 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.426759 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.426766 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.426772 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.426779 | orchestrator | 2025-05-30 00:58:19.426785 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-30 00:58:19.426797 | orchestrator | Friday 30 May 2025 00:46:58 +0000 (0:00:00.813) 0:01:31.427 ************ 2025-05-30 00:58:19.426804 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.426829 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.426836 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.426843 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.426849 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.426856 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.426862 | orchestrator | 2025-05-30 00:58:19.426869 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-30 00:58:19.426876 | orchestrator | Friday 30 May 2025 00:46:58 +0000 (0:00:00.659) 0:01:32.087 ************ 2025-05-30 00:58:19.426882 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.426889 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.426895 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.426902 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.426908 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.426936 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.426943 | orchestrator | 2025-05-30 00:58:19.426950 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-30 00:58:19.426956 | orchestrator | Friday 30 May 2025 00:47:00 +0000 (0:00:01.178) 0:01:33.266 ************ 2025-05-30 
00:58:19.426963 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.426969 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.426976 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.426982 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.426989 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.426995 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.427003 | orchestrator | 2025-05-30 00:58:19.427011 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-30 00:58:19.427018 | orchestrator | Friday 30 May 2025 00:47:01 +0000 (0:00:00.918) 0:01:34.185 ************ 2025-05-30 00:58:19.427026 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.427034 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.427041 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.427048 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.427056 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.427063 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.427071 | orchestrator | 2025-05-30 00:58:19.427078 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-30 00:58:19.427087 | orchestrator | Friday 30 May 2025 00:47:02 +0000 (0:00:01.090) 0:01:35.275 ************ 2025-05-30 00:58:19.427094 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.427101 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.427109 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.427117 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.427124 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.427132 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.427139 | orchestrator | 2025-05-30 00:58:19.427147 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-30 00:58:19.427155 | orchestrator | Friday 30 May 2025 00:47:02 +0000 (0:00:00.625) 0:01:35.901 ************ 2025-05-30 00:58:19.427162 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.427169 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.427176 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.427184 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.427191 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.427198 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.427206 | orchestrator | 2025-05-30 00:58:19.427213 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-30 00:58:19.427226 | orchestrator | Friday 30 May 2025 00:47:03 +0000 (0:00:00.824) 0:01:36.725 ************ 2025-05-30 00:58:19.427239 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.427251 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.427258 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.427266 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.427273 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.427281 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.427289 | orchestrator | 2025-05-30 00:58:19.427297 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-30 00:58:19.427305 
| orchestrator | Friday 30 May 2025 00:47:04 +0000 (0:00:00.599) 0:01:37.325 ************ 2025-05-30 00:58:19.427312 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.427320 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.427327 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.427335 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.427343 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.427350 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.427357 | orchestrator | 2025-05-30 00:58:19.427364 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-30 00:58:19.427370 | orchestrator | Friday 30 May 2025 00:47:05 +0000 (0:00:00.850) 0:01:38.176 ************ 2025-05-30 00:58:19.427377 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.427383 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.427390 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.427396 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.427403 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.427409 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.427416 | orchestrator | 2025-05-30 00:58:19.427422 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-30 00:58:19.427429 | orchestrator | Friday 30 May 2025 00:47:05 +0000 (0:00:00.861) 0:01:39.037 ************ 2025-05-30 00:58:19.427435 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-30 00:58:19.427442 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-30 00:58:19.427449 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.427455 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-30 00:58:19.427462 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-30 00:58:19.427468 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.427475 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-30 00:58:19.427481 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-30 00:58:19.427488 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.427494 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-30 00:58:19.427501 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-30 00:58:19.427507 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.427513 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-30 00:58:19.427520 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-30 00:58:19.427526 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.427533 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-30 00:58:19.427539 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-30 00:58:19.427546 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.427552 | orchestrator | 2025-05-30 00:58:19.427559 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-30 00:58:19.427565 | orchestrator | Friday 30 May 2025 00:47:07 +0000 (0:00:01.236) 0:01:40.274 ************ 2025-05-30 00:58:19.427572 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-30 00:58:19.427579 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-30 00:58:19.427585 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.427592 | orchestrator | skipping: 
[testbed-node-1] => (item=osd memory target)  2025-05-30 00:58:19.427598 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-30 00:58:19.427605 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.427616 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-30 00:58:19.427623 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-30 00:58:19.427629 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.427636 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-30 00:58:19.427642 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-30 00:58:19.427649 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.427655 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-30 00:58:19.427662 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-30 00:58:19.427668 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.427675 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-30 00:58:19.427681 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-30 00:58:19.427688 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.427694 | orchestrator | 2025-05-30 00:58:19.427701 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-30 00:58:19.427707 | orchestrator | Friday 30 May 2025 00:47:08 +0000 (0:00:00.875) 0:01:41.150 ************ 2025-05-30 00:58:19.427714 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.427720 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.427727 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.427733 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.427740 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.427746 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.427753 | orchestrator | 2025-05-30 00:58:19.427759 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-30 00:58:19.427766 | orchestrator | Friday 30 May 2025 00:47:09 +0000 (0:00:01.137) 0:01:42.288 ************ 2025-05-30 00:58:19.427772 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.427779 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.427785 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.427792 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.427798 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.427805 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.427812 | orchestrator | 2025-05-30 00:58:19.427822 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-30 00:58:19.427832 | orchestrator | Friday 30 May 2025 00:47:09 +0000 (0:00:00.697) 0:01:42.986 ************ 2025-05-30 00:58:19.427839 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.427846 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.427852 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.427859 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.427865 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.427872 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.427879 | orchestrator | 2025-05-30 
00:58:19.427885 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-30 00:58:19.427892 | orchestrator | Friday 30 May 2025 00:47:10 +0000 (0:00:00.869) 0:01:43.855 ************ 2025-05-30 00:58:19.427899 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.427905 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.427932 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.427944 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.427954 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.427966 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.427977 | orchestrator | 2025-05-30 00:58:19.427988 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-30 00:58:19.427998 | orchestrator | Friday 30 May 2025 00:47:11 +0000 (0:00:00.643) 0:01:44.498 ************ 2025-05-30 00:58:19.428005 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.428011 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.428026 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.428033 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.428039 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.428046 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.428052 | orchestrator | 2025-05-30 00:58:19.428059 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-30 00:58:19.428065 | orchestrator | Friday 30 May 2025 00:47:12 +0000 (0:00:00.839) 0:01:45.338 ************ 2025-05-30 00:58:19.428072 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.428079 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.428085 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.428092 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.428098 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.428105 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.428111 | orchestrator | 2025-05-30 00:58:19.428118 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-30 00:58:19.428125 | orchestrator | Friday 30 May 2025 00:47:12 +0000 (0:00:00.625) 0:01:45.963 ************ 2025-05-30 00:58:19.428131 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-30 00:58:19.428138 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-30 00:58:19.428144 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-30 00:58:19.428151 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.428157 | orchestrator | 2025-05-30 00:58:19.428164 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-30 00:58:19.428171 | orchestrator | Friday 30 May 2025 00:47:13 +0000 (0:00:00.893) 0:01:46.857 ************ 2025-05-30 00:58:19.428177 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-30 00:58:19.428184 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-30 00:58:19.428190 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-30 00:58:19.428197 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.428203 | orchestrator | 2025-05-30 00:58:19.428210 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 
2025-05-30 00:58:19.428217 | orchestrator | Friday 30 May 2025 00:47:14 +0000 (0:00:00.405) 0:01:47.263 ************ 2025-05-30 00:58:19.428223 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-30 00:58:19.428230 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-30 00:58:19.428236 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-30 00:58:19.428243 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.428249 | orchestrator | 2025-05-30 00:58:19.428256 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-30 00:58:19.428262 | orchestrator | Friday 30 May 2025 00:47:14 +0000 (0:00:00.395) 0:01:47.659 ************ 2025-05-30 00:58:19.428269 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.428276 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.428282 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.428289 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.428295 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.428302 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.428308 | orchestrator | 2025-05-30 00:58:19.428315 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-30 00:58:19.428322 | orchestrator | Friday 30 May 2025 00:47:15 +0000 (0:00:00.588) 0:01:48.247 ************ 2025-05-30 00:58:19.428328 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-30 00:58:19.428335 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.428341 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-30 00:58:19.428348 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.428354 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-30 00:58:19.428361 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.428367 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-30 00:58:19.428377 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.428384 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-30 00:58:19.428390 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.428397 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-30 00:58:19.428403 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.428410 | orchestrator | 2025-05-30 00:58:19.428416 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-30 00:58:19.428423 | orchestrator | Friday 30 May 2025 00:47:16 +0000 (0:00:01.091) 0:01:49.339 ************ 2025-05-30 00:58:19.428430 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.428436 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.428447 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.428454 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.428460 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.428471 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.428477 | orchestrator | 2025-05-30 00:58:19.428484 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-30 00:58:19.428491 | orchestrator | Friday 30 May 2025 00:47:16 +0000 (0:00:00.606) 0:01:49.946 ************ 2025-05-30 00:58:19.428497 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.428504 | orchestrator | skipping: [testbed-node-1] 
2025-05-30 00:58:19.428510 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.428517 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.428523 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.428530 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.428536 | orchestrator | 2025-05-30 00:58:19.428543 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-30 00:58:19.428549 | orchestrator | Friday 30 May 2025 00:47:17 +0000 (0:00:00.814) 0:01:50.760 ************ 2025-05-30 00:58:19.428556 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-30 00:58:19.428562 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.428569 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-30 00:58:19.428575 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.428582 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-30 00:58:19.428589 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.428595 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-30 00:58:19.428602 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.428608 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-30 00:58:19.428615 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.428621 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-30 00:58:19.428628 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.428634 | orchestrator | 2025-05-30 00:58:19.428641 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-30 00:58:19.428648 | orchestrator | Friday 30 May 2025 00:47:18 +0000 (0:00:00.756) 0:01:51.517 ************ 2025-05-30 00:58:19.428654 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.428661 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.428667 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.428674 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-30 00:58:19.428680 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.428687 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-30 00:58:19.428694 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.428700 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-30 00:58:19.428707 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.428714 | orchestrator | 2025-05-30 00:58:19.428720 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-30 00:58:19.428731 | orchestrator | Friday 30 May 2025 00:47:19 +0000 (0:00:00.823) 0:01:52.340 ************ 2025-05-30 00:58:19.428738 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-30 00:58:19.428744 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-30 00:58:19.428751 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-30 00:58:19.428757 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.428764 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-30 00:58:19.428770 | orchestrator | skipping: [testbed-node-1] 
=> (item=testbed-node-4)  2025-05-30 00:58:19.428777 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-30 00:58:19.428783 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.428790 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-30 00:58:19.428797 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-30 00:58:19.428817 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-30 00:58:19.428824 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.428830 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-30 00:58:19.428837 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-30 00:58:19.428843 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-30 00:58:19.428850 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-30 00:58:19.428856 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-30 00:58:19.428863 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-30 00:58:19.428869 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.428876 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.428883 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-30 00:58:19.428889 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-30 00:58:19.428896 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-30 00:58:19.428902 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.428909 | orchestrator | 2025-05-30 00:58:19.428931 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-30 00:58:19.428937 | orchestrator | Friday 30 May 2025 00:47:20 +0000 (0:00:01.547) 0:01:53.888 ************ 2025-05-30 00:58:19.428944 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.428950 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.428957 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.428964 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.428970 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.428977 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.428983 | orchestrator | 2025-05-30 00:58:19.428990 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-30 00:58:19.429000 | orchestrator | Friday 30 May 2025 00:47:22 +0000 (0:00:01.304) 0:01:55.193 ************ 2025-05-30 00:58:19.429007 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.429017 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.429023 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.429030 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-30 00:58:19.429037 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.429043 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-30 00:58:19.429050 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.429056 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-30 00:58:19.429063 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.429069 | orchestrator | 2025-05-30 00:58:19.429076 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-30 00:58:19.429083 | orchestrator | Friday 30 May 
2025 00:47:23 +0000 (0:00:01.510) 0:01:56.703 ************ 2025-05-30 00:58:19.429089 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.429100 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.429107 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.429113 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.429120 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.429126 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.429133 | orchestrator | 2025-05-30 00:58:19.429140 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-30 00:58:19.429146 | orchestrator | Friday 30 May 2025 00:47:24 +0000 (0:00:01.242) 0:01:57.946 ************ 2025-05-30 00:58:19.429153 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.429159 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.429166 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.429172 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.429179 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.429185 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.429192 | orchestrator | 2025-05-30 00:58:19.429198 | orchestrator | TASK [ceph-container-common : generate systemd ceph-mon target file] *********** 2025-05-30 00:58:19.429205 | orchestrator | Friday 30 May 2025 00:47:26 +0000 (0:00:01.525) 0:01:59.472 ************ 2025-05-30 00:58:19.429211 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:19.429218 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:19.429224 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:19.429231 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.429238 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.429244 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:58:19.429251 | orchestrator | 2025-05-30 00:58:19.429257 | orchestrator | TASK [ceph-container-common : enable ceph.target] ****************************** 2025-05-30 00:58:19.429264 | orchestrator | Friday 30 May 2025 00:47:27 +0000 (0:00:01.575) 0:02:01.047 ************ 2025-05-30 00:58:19.429270 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.429277 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.429283 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:19.429290 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:58:19.429296 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:19.429303 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:19.429309 | orchestrator | 2025-05-30 00:58:19.429316 | orchestrator | TASK [ceph-container-common : include prerequisites.yml] *********************** 2025-05-30 00:58:19.429322 | orchestrator | Friday 30 May 2025 00:47:30 +0000 (0:00:02.268) 0:02:03.315 ************ 2025-05-30 00:58:19.429329 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:58:19.429336 | orchestrator | 2025-05-30 00:58:19.429343 | orchestrator | TASK [ceph-container-common : stop lvmetad] ************************************ 2025-05-30 00:58:19.429349 | orchestrator | Friday 30 May 2025 00:47:31 +0000 (0:00:01.179) 0:02:04.495 ************ 2025-05-30 00:58:19.429356 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.429362 | orchestrator | skipping: [testbed-node-1] 2025-05-30 
00:58:19.429369 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.429375 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.429382 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.429388 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.429395 | orchestrator | 2025-05-30 00:58:19.429401 | orchestrator | TASK [ceph-container-common : disable and mask lvmetad service] **************** 2025-05-30 00:58:19.429408 | orchestrator | Friday 30 May 2025 00:47:31 +0000 (0:00:00.618) 0:02:05.113 ************ 2025-05-30 00:58:19.429414 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.429421 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.429427 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.429434 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.429440 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.429447 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.429454 | orchestrator | 2025-05-30 00:58:19.429465 | orchestrator | TASK [ceph-container-common : remove ceph udev rules] ************************** 2025-05-30 00:58:19.429471 | orchestrator | Friday 30 May 2025 00:47:32 +0000 (0:00:00.859) 0:02:05.973 ************ 2025-05-30 00:58:19.429478 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-30 00:58:19.429485 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-30 00:58:19.429491 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-30 00:58:19.429498 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-30 00:58:19.429504 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-30 00:58:19.429511 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-30 00:58:19.429518 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-30 00:58:19.429524 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-30 00:58:19.429531 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-30 00:58:19.429540 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-30 00:58:19.429551 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-30 00:58:19.429558 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-30 00:58:19.429564 | orchestrator | 2025-05-30 00:58:19.429571 | orchestrator | TASK [ceph-container-common : ensure tmpfiles.d is present] ******************** 2025-05-30 00:58:19.429578 | orchestrator | Friday 30 May 2025 00:47:34 +0000 (0:00:01.220) 0:02:07.194 ************ 2025-05-30 00:58:19.429584 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:19.429591 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:19.429597 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:19.429604 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.429610 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.429617 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:58:19.429624 | orchestrator | 2025-05-30 00:58:19.429630 | orchestrator | TASK [ceph-container-common : restore 
certificates selinux context] ************ 2025-05-30 00:58:19.429637 | orchestrator | Friday 30 May 2025 00:47:35 +0000 (0:00:01.344) 0:02:08.539 ************ 2025-05-30 00:58:19.429643 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.429650 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.429656 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.429663 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.429669 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.429676 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.429682 | orchestrator | 2025-05-30 00:58:19.429689 | orchestrator | TASK [ceph-container-common : include registry.yml] **************************** 2025-05-30 00:58:19.429696 | orchestrator | Friday 30 May 2025 00:47:36 +0000 (0:00:00.848) 0:02:09.387 ************ 2025-05-30 00:58:19.429702 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.429709 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.429715 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.429722 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.429728 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.429735 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.429741 | orchestrator | 2025-05-30 00:58:19.429748 | orchestrator | TASK [ceph-container-common : include fetch_image.yml] ************************* 2025-05-30 00:58:19.429754 | orchestrator | Friday 30 May 2025 00:47:37 +0000 (0:00:01.005) 0:02:10.392 ************ 2025-05-30 00:58:19.429761 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:58:19.429768 | orchestrator | 2025-05-30 00:58:19.429774 | orchestrator | TASK [ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image] *** 2025-05-30 00:58:19.429785 | orchestrator | Friday 30 May 2025 00:47:38 +0000 (0:00:01.278) 0:02:11.670 ************ 2025-05-30 00:58:19.429792 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.429798 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.429805 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.429812 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.429818 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.429825 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.429831 | orchestrator | 2025-05-30 00:58:19.429838 | orchestrator | TASK [ceph-container-common : pulling alertmanager/prometheus/grafana container images] *** 2025-05-30 00:58:19.429844 | orchestrator | Friday 30 May 2025 00:48:24 +0000 (0:00:46.328) 0:02:57.999 ************ 2025-05-30 00:58:19.429851 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-30 00:58:19.429857 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-30 00:58:19.429864 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-30 00:58:19.429871 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.429877 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-30 00:58:19.429884 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-30 00:58:19.429890 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  
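[Annotation] The 46-second "pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image" step above is the only image pull that actually runs here; the alertmanager/prometheus/grafana pulls are skipped, presumably because the monitoring/dashboard stack is not enabled in this testbed run. As a point of reference, a minimal standalone sketch of such a pull as an Ansible task could look like the following. It assumes the community.docker collection and a Docker engine on the target nodes; the real ceph-container-common role adds its own retry and container-engine handling, so this is illustrative only.

  # Illustrative only: the image reference is taken from the log output above,
  # everything else (module choice, privilege escalation) is an assumption.
  - name: Pull the ceph-daemon container image
    community.docker.docker_image:
      name: registry.osism.tech/osism/ceph-daemon
      tag: "17.2.7"
      source: pull
    become: true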
2025-05-30 00:58:19.429897 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.429903 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-30 00:58:19.429948 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-30 00:58:19.429957 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-30 00:58:19.429963 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.429970 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-30 00:58:19.429976 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-30 00:58:19.429983 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-30 00:58:19.429990 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.429996 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-30 00:58:19.430003 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-30 00:58:19.430009 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-30 00:58:19.430050 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.430056 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-30 00:58:19.430063 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-30 00:58:19.430070 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-30 00:58:19.430077 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.430083 | orchestrator | 2025-05-30 00:58:19.430090 | orchestrator | TASK [ceph-container-common : pulling node-exporter container image] *********** 2025-05-30 00:58:19.430101 | orchestrator | Friday 30 May 2025 00:48:25 +0000 (0:00:00.962) 0:02:58.962 ************ 2025-05-30 00:58:19.430112 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.430119 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.430125 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.430132 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.430139 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.430145 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.430152 | orchestrator | 2025-05-30 00:58:19.430158 | orchestrator | TASK [ceph-container-common : export local ceph dev image] ********************* 2025-05-30 00:58:19.430165 | orchestrator | Friday 30 May 2025 00:48:26 +0000 (0:00:00.719) 0:02:59.681 ************ 2025-05-30 00:58:19.430177 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.430183 | orchestrator | 2025-05-30 00:58:19.430190 | orchestrator | TASK [ceph-container-common : copy ceph dev image file] ************************ 2025-05-30 00:58:19.430197 | orchestrator | Friday 30 May 2025 00:48:26 +0000 (0:00:00.165) 0:02:59.847 ************ 2025-05-30 00:58:19.430203 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.430210 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.430216 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.430222 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.430228 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.430234 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.430240 | 
orchestrator | 2025-05-30 00:58:19.430246 | orchestrator | TASK [ceph-container-common : load ceph dev image] ***************************** 2025-05-30 00:58:19.430252 | orchestrator | Friday 30 May 2025 00:48:27 +0000 (0:00:00.863) 0:03:00.711 ************ 2025-05-30 00:58:19.430258 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.430265 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.430271 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.430277 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.430283 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.430289 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.430295 | orchestrator | 2025-05-30 00:58:19.430301 | orchestrator | TASK [ceph-container-common : remove tmp ceph dev image file] ****************** 2025-05-30 00:58:19.430307 | orchestrator | Friday 30 May 2025 00:48:28 +0000 (0:00:00.795) 0:03:01.506 ************ 2025-05-30 00:58:19.430313 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.430319 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.430325 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.430331 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.430337 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.430344 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.430350 | orchestrator | 2025-05-30 00:58:19.430356 | orchestrator | TASK [ceph-container-common : get ceph version] ******************************** 2025-05-30 00:58:19.430362 | orchestrator | Friday 30 May 2025 00:48:29 +0000 (0:00:01.080) 0:03:02.587 ************ 2025-05-30 00:58:19.430368 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.430374 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.430380 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.430386 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.430393 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.430399 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.430405 | orchestrator | 2025-05-30 00:58:19.430411 | orchestrator | TASK [ceph-container-common : set_fact ceph_version ceph_version.stdout.split] *** 2025-05-30 00:58:19.430417 | orchestrator | Friday 30 May 2025 00:48:31 +0000 (0:00:01.864) 0:03:04.452 ************ 2025-05-30 00:58:19.430423 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.430429 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.430435 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.430441 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.430447 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.430454 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.430460 | orchestrator | 2025-05-30 00:58:19.430466 | orchestrator | TASK [ceph-container-common : include release.yml] ***************************** 2025-05-30 00:58:19.430472 | orchestrator | Friday 30 May 2025 00:48:32 +0000 (0:00:00.971) 0:03:05.424 ************ 2025-05-30 00:58:19.430478 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:58:19.430486 | orchestrator | 2025-05-30 00:58:19.430492 | orchestrator | TASK [ceph-container-common : set_fact ceph_release jewel] ********************* 2025-05-30 00:58:19.430498 | orchestrator | Friday 30 May 2025 00:48:33 +0000 (0:00:01.247) 0:03:06.671 ************ 2025-05-30 
00:58:19.430504 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.430510 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.430520 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.430526 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.430532 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.430539 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.430545 | orchestrator | 2025-05-30 00:58:19.430551 | orchestrator | TASK [ceph-container-common : set_fact ceph_release kraken] ******************** 2025-05-30 00:58:19.430557 | orchestrator | Friday 30 May 2025 00:48:34 +0000 (0:00:00.611) 0:03:07.283 ************ 2025-05-30 00:58:19.430563 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.430569 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.430575 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.430581 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.430587 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.430593 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.430599 | orchestrator | 2025-05-30 00:58:19.430605 | orchestrator | TASK [ceph-container-common : set_fact ceph_release luminous] ****************** 2025-05-30 00:58:19.430612 | orchestrator | Friday 30 May 2025 00:48:35 +0000 (0:00:00.879) 0:03:08.162 ************ 2025-05-30 00:58:19.430618 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.430624 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.430630 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.430636 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.430642 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.430648 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.430654 | orchestrator | 2025-05-30 00:58:19.430660 | orchestrator | TASK [ceph-container-common : set_fact ceph_release mimic] ********************* 2025-05-30 00:58:19.430666 | orchestrator | Friday 30 May 2025 00:48:35 +0000 (0:00:00.691) 0:03:08.854 ************ 2025-05-30 00:58:19.430672 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.430737 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.430746 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.430755 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.430761 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.430768 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.430774 | orchestrator | 2025-05-30 00:58:19.430780 | orchestrator | TASK [ceph-container-common : set_fact ceph_release nautilus] ****************** 2025-05-30 00:58:19.430786 | orchestrator | Friday 30 May 2025 00:48:36 +0000 (0:00:00.936) 0:03:09.791 ************ 2025-05-30 00:58:19.430792 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.430798 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.430805 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.430811 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.430817 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.430823 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.430829 | orchestrator | 2025-05-30 00:58:19.430835 | orchestrator | TASK [ceph-container-common : set_fact ceph_release octopus] ******************* 2025-05-30 00:58:19.430841 | orchestrator | Friday 30 May 2025 00:48:37 +0000 (0:00:00.607) 0:03:10.398 ************ 
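[Annotation] The release detection above queries the version string reported by the pulled image and then walks a chain of conditional set_fact tasks, one per release name, until the major version matches; with ceph-daemon 17.2.7 only the quincy task fires, as visible just below. A rough sketch of the same idea, condensed into two tasks, is given here; the exact command, the registered variable name, and the version-to-release table are assumptions for illustration, not the role's actual implementation.

  # Illustrative condensation of the per-release set_fact chain above; the
  # docker invocation and the mapping table are assumed, not taken from ceph-ansible.
  - name: Get ceph version reported by the container image
    ansible.builtin.command: >-
      docker run --rm --entrypoint /usr/bin/ceph
      registry.osism.tech/osism/ceph-daemon:17.2.7 --version
    register: ceph_version
    changed_when: false

  - name: Map the major version to a release name (17 maps to quincy)
    ansible.builtin.set_fact:
      ceph_release: "{{ {'15': 'octopus', '16': 'pacific', '17': 'quincy'}[ceph_version.stdout.split(' ')[2].split('.')[0]] }}"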
2025-05-30 00:58:19.430847 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.430854 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.430860 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.430866 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.430872 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.430878 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.430884 | orchestrator | 2025-05-30 00:58:19.430890 | orchestrator | TASK [ceph-container-common : set_fact ceph_release pacific] ******************* 2025-05-30 00:58:19.430897 | orchestrator | Friday 30 May 2025 00:48:38 +0000 (0:00:01.021) 0:03:11.420 ************ 2025-05-30 00:58:19.430903 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.430909 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.430928 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.430934 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.430945 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.430951 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.430957 | orchestrator | 2025-05-30 00:58:19.430963 | orchestrator | TASK [ceph-container-common : set_fact ceph_release quincy] ******************** 2025-05-30 00:58:19.430970 | orchestrator | Friday 30 May 2025 00:48:38 +0000 (0:00:00.650) 0:03:12.070 ************ 2025-05-30 00:58:19.430976 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.430982 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.430988 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.430994 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.431001 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.431007 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.431013 | orchestrator | 2025-05-30 00:58:19.431019 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-30 00:58:19.431025 | orchestrator | Friday 30 May 2025 00:48:40 +0000 (0:00:01.301) 0:03:13.372 ************ 2025-05-30 00:58:19.431032 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:58:19.431038 | orchestrator | 2025-05-30 00:58:19.431044 | orchestrator | TASK [ceph-config : create ceph initial directories] *************************** 2025-05-30 00:58:19.431050 | orchestrator | Friday 30 May 2025 00:48:41 +0000 (0:00:01.319) 0:03:14.692 ************ 2025-05-30 00:58:19.431056 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-05-30 00:58:19.431063 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-05-30 00:58:19.431069 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-05-30 00:58:19.431075 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-05-30 00:58:19.431081 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-05-30 00:58:19.431087 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-05-30 00:58:19.431094 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-05-30 00:58:19.431100 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-05-30 00:58:19.431106 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-05-30 00:58:19.431112 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-05-30 
00:58:19.431118 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-05-30 00:58:19.431124 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-05-30 00:58:19.431130 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-05-30 00:58:19.431136 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-05-30 00:58:19.431142 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-05-30 00:58:19.431149 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-05-30 00:58:19.431155 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-05-30 00:58:19.431161 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-05-30 00:58:19.431167 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-05-30 00:58:19.431173 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-05-30 00:58:19.431179 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-05-30 00:58:19.431185 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-05-30 00:58:19.431191 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-05-30 00:58:19.431197 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-05-30 00:58:19.431203 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-05-30 00:58:19.431209 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-05-30 00:58:19.431216 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-05-30 00:58:19.431222 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-05-30 00:58:19.431228 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-05-30 00:58:19.431241 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-05-30 00:58:19.431252 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-05-30 00:58:19.431264 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-05-30 00:58:19.431270 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-05-30 00:58:19.431276 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-05-30 00:58:19.431282 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-05-30 00:58:19.431288 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-05-30 00:58:19.431295 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-30 00:58:19.431301 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-05-30 00:58:19.431307 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-05-30 00:58:19.431313 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-05-30 00:58:19.431319 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-05-30 00:58:19.431326 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-30 00:58:19.431332 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-30 00:58:19.431338 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-05-30 00:58:19.431344 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-30 00:58:19.431350 | orchestrator | changed: [testbed-node-2] => 
(item=/var/lib/ceph/bootstrap-rgw) 2025-05-30 00:58:19.431356 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-30 00:58:19.431362 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-05-30 00:58:19.431368 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-30 00:58:19.431374 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-30 00:58:19.431380 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-30 00:58:19.431387 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-30 00:58:19.431393 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-30 00:58:19.431399 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-30 00:58:19.431405 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-30 00:58:19.431411 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-30 00:58:19.431417 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-30 00:58:19.431423 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-30 00:58:19.431429 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-30 00:58:19.431435 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-30 00:58:19.431442 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-30 00:58:19.431448 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-30 00:58:19.431454 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-30 00:58:19.431460 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-30 00:58:19.431466 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-30 00:58:19.431472 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-30 00:58:19.431478 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-30 00:58:19.431484 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-30 00:58:19.431490 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-30 00:58:19.431501 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-30 00:58:19.431507 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-30 00:58:19.431514 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-05-30 00:58:19.431520 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-30 00:58:19.431526 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-30 00:58:19.431532 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-30 00:58:19.431538 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-30 00:58:19.431544 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-30 00:58:19.431550 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-05-30 00:58:19.431556 | orchestrator | changed: [testbed-node-4] 
=> (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-30 00:58:19.431563 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-05-30 00:58:19.431569 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-30 00:58:19.431575 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-05-30 00:58:19.431581 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-05-30 00:58:19.431587 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-05-30 00:58:19.431594 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-05-30 00:58:19.431600 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-05-30 00:58:19.431606 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-05-30 00:58:19.431615 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-05-30 00:58:19.431625 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-05-30 00:58:19.431631 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-05-30 00:58:19.431637 | orchestrator | 2025-05-30 00:58:19.431643 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-30 00:58:19.431650 | orchestrator | Friday 30 May 2025 00:48:47 +0000 (0:00:06.078) 0:03:20.770 ************ 2025-05-30 00:58:19.431656 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.431662 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.431668 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.431674 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:58:19.431680 | orchestrator | 2025-05-30 00:58:19.431687 | orchestrator | TASK [ceph-config : create rados gateway instance directories] ***************** 2025-05-30 00:58:19.431693 | orchestrator | Friday 30 May 2025 00:48:48 +0000 (0:00:01.189) 0:03:21.959 ************ 2025-05-30 00:58:19.431699 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-30 00:58:19.431706 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-30 00:58:19.431712 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-30 00:58:19.431718 | orchestrator | 2025-05-30 00:58:19.431725 | orchestrator | TASK [ceph-config : generate environment file] ********************************* 2025-05-30 00:58:19.431731 | orchestrator | Friday 30 May 2025 00:48:50 +0000 (0:00:01.266) 0:03:23.226 ************ 2025-05-30 00:58:19.431737 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-30 00:58:19.431743 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-30 00:58:19.431749 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-30 00:58:19.431760 | orchestrator | 2025-05-30 00:58:19.431766 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 
2025-05-30 00:58:19.431772 | orchestrator | Friday 30 May 2025 00:48:51 +0000 (0:00:01.179) 0:03:24.406 ************ 2025-05-30 00:58:19.431779 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.431785 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.431791 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.431797 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.431803 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.431809 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.431815 | orchestrator | 2025-05-30 00:58:19.431822 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-30 00:58:19.431828 | orchestrator | Friday 30 May 2025 00:48:52 +0000 (0:00:00.873) 0:03:25.279 ************ 2025-05-30 00:58:19.431834 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.431840 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.431846 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.431853 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.431859 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.431865 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.431871 | orchestrator | 2025-05-30 00:58:19.431877 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-30 00:58:19.431884 | orchestrator | Friday 30 May 2025 00:48:52 +0000 (0:00:00.647) 0:03:25.927 ************ 2025-05-30 00:58:19.431890 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.431896 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.431902 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.431908 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.431927 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.431933 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.431940 | orchestrator | 2025-05-30 00:58:19.431946 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-30 00:58:19.431952 | orchestrator | Friday 30 May 2025 00:48:53 +0000 (0:00:00.898) 0:03:26.826 ************ 2025-05-30 00:58:19.431958 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.431964 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.431970 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.431976 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.431982 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.431988 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.431994 | orchestrator | 2025-05-30 00:58:19.432001 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-30 00:58:19.432007 | orchestrator | Friday 30 May 2025 00:48:54 +0000 (0:00:00.677) 0:03:27.503 ************ 2025-05-30 00:58:19.432013 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.432019 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.432025 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.432031 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.432037 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.432043 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.432050 | orchestrator | 2025-05-30 00:58:19.432056 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-30 
00:58:19.432062 | orchestrator | Friday 30 May 2025 00:48:55 +0000 (0:00:00.884) 0:03:28.387 ************ 2025-05-30 00:58:19.432068 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.432074 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.432080 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.432086 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.432093 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.432099 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.432105 | orchestrator | 2025-05-30 00:58:19.432118 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-30 00:58:19.432132 | orchestrator | Friday 30 May 2025 00:48:55 +0000 (0:00:00.678) 0:03:29.066 ************ 2025-05-30 00:58:19.432138 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.432144 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.432150 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.432156 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.432162 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.432169 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.432175 | orchestrator | 2025-05-30 00:58:19.432181 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-30 00:58:19.432187 | orchestrator | Friday 30 May 2025 00:48:56 +0000 (0:00:00.848) 0:03:29.915 ************ 2025-05-30 00:58:19.432193 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.432200 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.432206 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.432212 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.432218 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.432224 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.432230 | orchestrator | 2025-05-30 00:58:19.432236 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-30 00:58:19.432243 | orchestrator | Friday 30 May 2025 00:48:57 +0000 (0:00:00.615) 0:03:30.530 ************ 2025-05-30 00:58:19.432249 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.432255 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.432261 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.432267 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.432273 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.432279 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.432285 | orchestrator | 2025-05-30 00:58:19.432292 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-30 00:58:19.432298 | orchestrator | Friday 30 May 2025 00:48:59 +0000 (0:00:02.275) 0:03:32.806 ************ 2025-05-30 00:58:19.432304 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.432310 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.432316 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.432323 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.432329 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.432335 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.432341 | orchestrator | 2025-05-30 00:58:19.432347 | orchestrator | TASK [ceph-config : set_fact 
_osd_memory_target, override from ceph_conf_overrides] *** 2025-05-30 00:58:19.432354 | orchestrator | Friday 30 May 2025 00:49:00 +0000 (0:00:00.856) 0:03:33.662 ************ 2025-05-30 00:58:19.432360 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-30 00:58:19.432366 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-30 00:58:19.432372 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.432378 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-30 00:58:19.432385 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-30 00:58:19.432391 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.432397 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-30 00:58:19.432403 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-30 00:58:19.432409 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.432415 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-30 00:58:19.432421 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-30 00:58:19.432428 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.432434 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-30 00:58:19.432440 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-30 00:58:19.432446 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.432452 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-30 00:58:19.432458 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-30 00:58:19.432464 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.432475 | orchestrator | 2025-05-30 00:58:19.432481 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-30 00:58:19.432487 | orchestrator | Friday 30 May 2025 00:49:01 +0000 (0:00:01.003) 0:03:34.665 ************ 2025-05-30 00:58:19.432494 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-30 00:58:19.432500 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-30 00:58:19.432506 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.432512 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-30 00:58:19.432518 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-30 00:58:19.432524 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.432530 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-30 00:58:19.432537 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-30 00:58:19.432543 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.432549 | orchestrator | ok: [testbed-node-3] => (item=osd memory target) 2025-05-30 00:58:19.432555 | orchestrator | ok: [testbed-node-3] => (item=osd_memory_target) 2025-05-30 00:58:19.432561 | orchestrator | ok: [testbed-node-4] => (item=osd memory target) 2025-05-30 00:58:19.432567 | orchestrator | ok: [testbed-node-4] => (item=osd_memory_target) 2025-05-30 00:58:19.432573 | orchestrator | ok: [testbed-node-5] => (item=osd memory target) 2025-05-30 00:58:19.432579 | orchestrator | ok: [testbed-node-5] => (item=osd_memory_target) 2025-05-30 00:58:19.432586 | orchestrator | 2025-05-30 00:58:19.432592 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-30 00:58:19.432598 | orchestrator | Friday 30 May 2025 00:49:02 +0000 (0:00:00.760) 0:03:35.425 ************ 2025-05-30 
00:58:19.432604 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.432610 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.432616 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.432622 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.432629 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.432635 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.432641 | orchestrator | 2025-05-30 00:58:19.432647 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-30 00:58:19.432656 | orchestrator | Friday 30 May 2025 00:49:03 +0000 (0:00:00.882) 0:03:36.307 ************ 2025-05-30 00:58:19.432666 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.432672 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.432678 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.432684 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.432691 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.432697 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.432703 | orchestrator | 2025-05-30 00:58:19.432709 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-30 00:58:19.432715 | orchestrator | Friday 30 May 2025 00:49:03 +0000 (0:00:00.559) 0:03:36.867 ************ 2025-05-30 00:58:19.432722 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.432728 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.432734 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.432740 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.432746 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.432752 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.432758 | orchestrator | 2025-05-30 00:58:19.432764 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-30 00:58:19.432771 | orchestrator | Friday 30 May 2025 00:49:04 +0000 (0:00:00.695) 0:03:37.563 ************ 2025-05-30 00:58:19.432777 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.432783 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.432789 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.432795 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.432806 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.432812 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.432818 | orchestrator | 2025-05-30 00:58:19.432824 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-30 00:58:19.432830 | orchestrator | Friday 30 May 2025 00:49:04 +0000 (0:00:00.547) 0:03:38.110 ************ 2025-05-30 00:58:19.432837 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.432843 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.432849 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.432855 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.432861 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.432867 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.432873 | orchestrator | 2025-05-30 00:58:19.432880 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-30 00:58:19.432886 | orchestrator | Friday 30 May 2025 00:49:05 +0000 
(0:00:00.866) 0:03:38.976 ************ 2025-05-30 00:58:19.432892 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.432898 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.432904 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.432923 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.432930 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.432936 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.432942 | orchestrator | 2025-05-30 00:58:19.432948 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-30 00:58:19.432954 | orchestrator | Friday 30 May 2025 00:49:06 +0000 (0:00:00.723) 0:03:39.699 ************ 2025-05-30 00:58:19.432961 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-30 00:58:19.432967 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-30 00:58:19.432973 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-30 00:58:19.432979 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.432985 | orchestrator | 2025-05-30 00:58:19.432991 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-30 00:58:19.432998 | orchestrator | Friday 30 May 2025 00:49:06 +0000 (0:00:00.382) 0:03:40.082 ************ 2025-05-30 00:58:19.433004 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-30 00:58:19.433010 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-30 00:58:19.433016 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-30 00:58:19.433022 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.433028 | orchestrator | 2025-05-30 00:58:19.433034 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-30 00:58:19.433041 | orchestrator | Friday 30 May 2025 00:49:07 +0000 (0:00:00.623) 0:03:40.705 ************ 2025-05-30 00:58:19.433047 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-30 00:58:19.433053 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-30 00:58:19.433059 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-30 00:58:19.433065 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.433071 | orchestrator | 2025-05-30 00:58:19.433077 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-30 00:58:19.433083 | orchestrator | Friday 30 May 2025 00:49:08 +0000 (0:00:00.849) 0:03:41.555 ************ 2025-05-30 00:58:19.433089 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.433095 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.433102 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.433108 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.433114 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.433120 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.433126 | orchestrator | 2025-05-30 00:58:19.433132 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-30 00:58:19.433138 | orchestrator | Friday 30 May 2025 00:49:09 +0000 (0:00:00.665) 0:03:42.220 ************ 2025-05-30 00:58:19.433149 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-30 00:58:19.433155 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.433161 | 
orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-30 00:58:19.433167 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-30 00:58:19.433173 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.433179 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.433185 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-30 00:58:19.433191 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-30 00:58:19.433198 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-30 00:58:19.433204 | orchestrator | 2025-05-30 00:58:19.433213 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-30 00:58:19.433223 | orchestrator | Friday 30 May 2025 00:49:10 +0000 (0:00:01.664) 0:03:43.885 ************ 2025-05-30 00:58:19.433229 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.433235 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.433241 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.433247 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.433253 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.433260 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.433266 | orchestrator | 2025-05-30 00:58:19.433272 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-30 00:58:19.433278 | orchestrator | Friday 30 May 2025 00:49:11 +0000 (0:00:00.720) 0:03:44.605 ************ 2025-05-30 00:58:19.433284 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.433290 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.433296 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.433302 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.433309 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.433315 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.433321 | orchestrator | 2025-05-30 00:58:19.433327 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-30 00:58:19.433333 | orchestrator | Friday 30 May 2025 00:49:12 +0000 (0:00:01.011) 0:03:45.617 ************ 2025-05-30 00:58:19.433339 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-30 00:58:19.433346 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-30 00:58:19.433352 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.433358 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.433364 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-30 00:58:19.433370 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.433376 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-30 00:58:19.433382 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.433388 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-30 00:58:19.433395 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.433401 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-30 00:58:19.433407 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.433413 | orchestrator | 2025-05-30 00:58:19.433419 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-30 00:58:19.433425 | orchestrator | Friday 30 May 2025 00:49:13 +0000 (0:00:01.126) 0:03:46.743 ************ 2025-05-30 00:58:19.433431 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.433437 | orchestrator | 
skipping: [testbed-node-1] 2025-05-30 00:58:19.433444 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.433450 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-30 00:58:19.433456 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.433462 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-30 00:58:19.433468 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.433475 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-30 00:58:19.433485 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.433492 | orchestrator | 2025-05-30 00:58:19.433498 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-30 00:58:19.433504 | orchestrator | Friday 30 May 2025 00:49:14 +0000 (0:00:00.827) 0:03:47.571 ************ 2025-05-30 00:58:19.433510 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-30 00:58:19.433516 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-30 00:58:19.433522 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-30 00:58:19.433528 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.433534 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-30 00:58:19.433541 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-30 00:58:19.433547 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-30 00:58:19.433553 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.433559 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-30 00:58:19.433565 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-30 00:58:19.433571 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-30 00:58:19.433577 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.433583 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-30 00:58:19.433589 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-30 00:58:19.433595 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-30 00:58:19.433601 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-30 00:58:19.433607 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-30 00:58:19.433614 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-30 00:58:19.433620 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.433626 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-30 00:58:19.433632 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.433638 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-30 00:58:19.433644 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-30 00:58:19.433650 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.433656 | orchestrator | 2025-05-30 00:58:19.433663 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-30 00:58:19.433669 | orchestrator | Friday 30 May 2025 00:49:15 +0000 (0:00:01.512) 
0:03:49.083 ************ 2025-05-30 00:58:19.433675 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:19.433681 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:19.433687 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.433693 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:19.433702 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.433709 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:58:19.433715 | orchestrator | 2025-05-30 00:58:19.433724 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-30 00:58:19.433731 | orchestrator | Friday 30 May 2025 00:49:20 +0000 (0:00:04.330) 0:03:53.414 ************ 2025-05-30 00:58:19.433737 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:19.433743 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:19.433749 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:19.433755 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.433761 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.433768 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:58:19.433774 | orchestrator | 2025-05-30 00:58:19.433780 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-05-30 00:58:19.433786 | orchestrator | Friday 30 May 2025 00:49:21 +0000 (0:00:01.003) 0:03:54.418 ************ 2025-05-30 00:58:19.433792 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.433803 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.433809 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.433815 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:58:19.433821 | orchestrator | 2025-05-30 00:58:19.433827 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-05-30 00:58:19.433834 | orchestrator | Friday 30 May 2025 00:49:22 +0000 (0:00:01.094) 0:03:55.513 ************ 2025-05-30 00:58:19.433840 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.433846 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.433852 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.433858 | orchestrator | 2025-05-30 00:58:19.433865 | orchestrator | TASK [ceph-handler : set _mon_handler_called before restart] ******************* 2025-05-30 00:58:19.433871 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:58:19.433877 | orchestrator | 2025-05-30 00:58:19.433883 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-05-30 00:58:19.433889 | orchestrator | Friday 30 May 2025 00:49:23 +0000 (0:00:01.140) 0:03:56.653 ************ 2025-05-30 00:58:19.433895 | orchestrator | 2025-05-30 00:58:19.433902 | orchestrator | TASK [ceph-handler : copy mon restart script] ********************************** 2025-05-30 00:58:19.433908 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-30 00:58:19.433950 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-30 00:58:19.433956 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-30 00:58:19.433963 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.433969 | orchestrator | 2025-05-30 00:58:19.433975 | orchestrator | RUNNING HANDLER [ceph-handler : copy 
mon restart script] *********************** 2025-05-30 00:58:19.433981 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:19.433987 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:19.433993 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:19.433999 | orchestrator | 2025-05-30 00:58:19.434006 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-05-30 00:58:19.434031 | orchestrator | Friday 30 May 2025 00:49:24 +0000 (0:00:01.302) 0:03:57.956 ************ 2025-05-30 00:58:19.434038 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-30 00:58:19.434045 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-30 00:58:19.434051 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-30 00:58:19.434057 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.434063 | orchestrator | 2025-05-30 00:58:19.434069 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-05-30 00:58:19.434075 | orchestrator | Friday 30 May 2025 00:49:25 +0000 (0:00:00.868) 0:03:58.824 ************ 2025-05-30 00:58:19.434081 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.434088 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.434094 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.434100 | orchestrator | 2025-05-30 00:58:19.434106 | orchestrator | TASK [ceph-handler : set _mon_handler_called after restart] ******************** 2025-05-30 00:58:19.434113 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.434119 | orchestrator | 2025-05-30 00:58:19.434125 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-05-30 00:58:19.434131 | orchestrator | Friday 30 May 2025 00:49:26 +0000 (0:00:00.772) 0:03:59.596 ************ 2025-05-30 00:58:19.434137 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.434144 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.434150 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.434156 | orchestrator | 2025-05-30 00:58:19.434162 | orchestrator | TASK [ceph-handler : osds handler] ********************************************* 2025-05-30 00:58:19.434168 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.434174 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.434180 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.434191 | orchestrator | 2025-05-30 00:58:19.434197 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-30 00:58:19.434203 | orchestrator | Friday 30 May 2025 00:49:27 +0000 (0:00:00.724) 0:04:00.320 ************ 2025-05-30 00:58:19.434209 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.434216 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.434222 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.434228 | orchestrator | 2025-05-30 00:58:19.434234 | orchestrator | TASK [ceph-handler : mdss handler] ********************************************* 2025-05-30 00:58:19.434240 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.434246 | orchestrator | 2025-05-30 00:58:19.434253 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-30 00:58:19.434259 | orchestrator | Friday 30 May 2025 00:49:27 +0000 (0:00:00.520) 0:04:00.841 ************ 2025-05-30 
00:58:19.434265 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.434271 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.434277 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.434283 | orchestrator | 2025-05-30 00:58:19.434289 | orchestrator | TASK [ceph-handler : rgws handler] ********************************************* 2025-05-30 00:58:19.434295 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.434300 | orchestrator | 2025-05-30 00:58:19.434306 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-05-30 00:58:19.434319 | orchestrator | Friday 30 May 2025 00:49:28 +0000 (0:00:01.021) 0:04:01.862 ************ 2025-05-30 00:58:19.434329 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.434335 | orchestrator | 2025-05-30 00:58:19.434341 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-05-30 00:58:19.434346 | orchestrator | Friday 30 May 2025 00:49:28 +0000 (0:00:00.155) 0:04:02.018 ************ 2025-05-30 00:58:19.434351 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.434357 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.434362 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.434367 | orchestrator | 2025-05-30 00:58:19.434373 | orchestrator | TASK [ceph-handler : rbdmirrors handler] *************************************** 2025-05-30 00:58:19.434378 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.434384 | orchestrator | 2025-05-30 00:58:19.434389 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-30 00:58:19.434394 | orchestrator | Friday 30 May 2025 00:49:29 +0000 (0:00:00.535) 0:04:02.553 ************ 2025-05-30 00:58:19.434400 | orchestrator | 2025-05-30 00:58:19.434405 | orchestrator | TASK [ceph-handler : mgrs handler] ********************************************* 2025-05-30 00:58:19.434411 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.434416 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:58:19.434422 | orchestrator | 2025-05-30 00:58:19.434427 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-05-30 00:58:19.434433 | orchestrator | Friday 30 May 2025 00:49:30 +0000 (0:00:01.078) 0:04:03.632 ************ 2025-05-30 00:58:19.434438 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.434443 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.434449 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.434454 | orchestrator | 2025-05-30 00:58:19.434460 | orchestrator | TASK [ceph-handler : set _mgr_handler_called before restart] ******************* 2025-05-30 00:58:19.434465 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-30 00:58:19.434470 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-30 00:58:19.434476 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-30 00:58:19.434481 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.434486 | orchestrator | 2025-05-30 00:58:19.434492 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-30 00:58:19.434497 | orchestrator | Friday 30 May 2025 00:49:31 +0000 (0:00:00.948) 0:04:04.581 ************ 2025-05-30 00:58:19.434507 | 
orchestrator | 2025-05-30 00:58:19.434513 | orchestrator | TASK [ceph-handler : copy mgr restart script] ********************************** 2025-05-30 00:58:19.434518 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.434524 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.434529 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.434534 | orchestrator | 2025-05-30 00:58:19.434540 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-30 00:58:19.434545 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:19.434550 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:19.434556 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:19.434561 | orchestrator | 2025-05-30 00:58:19.434567 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-05-30 00:58:19.434572 | orchestrator | Friday 30 May 2025 00:49:33 +0000 (0:00:01.627) 0:04:06.209 ************ 2025-05-30 00:58:19.434577 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-30 00:58:19.434583 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-30 00:58:19.434588 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-30 00:58:19.434593 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.434599 | orchestrator | 2025-05-30 00:58:19.434604 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-05-30 00:58:19.434609 | orchestrator | Friday 30 May 2025 00:49:33 +0000 (0:00:00.714) 0:04:06.923 ************ 2025-05-30 00:58:19.434615 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.434620 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.434626 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.434631 | orchestrator | 2025-05-30 00:58:19.434636 | orchestrator | TASK [ceph-handler : set _mgr_handler_called after restart] ******************** 2025-05-30 00:58:19.434642 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.434647 | orchestrator | 2025-05-30 00:58:19.434653 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-30 00:58:19.434658 | orchestrator | Friday 30 May 2025 00:49:34 +0000 (0:00:01.099) 0:04:08.023 ************ 2025-05-30 00:58:19.434663 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:58:19.434669 | orchestrator | 2025-05-30 00:58:19.434702 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-05-30 00:58:19.434708 | orchestrator | Friday 30 May 2025 00:49:35 +0000 (0:00:00.704) 0:04:08.728 ************ 2025-05-30 00:58:19.434713 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.434719 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.434724 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.434730 | orchestrator | 2025-05-30 00:58:19.434735 | orchestrator | TASK [ceph-handler : rbd-target-api and rbd-target-gw handler] ***************** 2025-05-30 00:58:19.434741 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.434746 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.434751 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.434757 | orchestrator | 2025-05-30 00:58:19.434762 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] 
*********************** 2025-05-30 00:58:19.434768 | orchestrator | Friday 30 May 2025 00:49:37 +0000 (0:00:01.633) 0:04:10.361 ************ 2025-05-30 00:58:19.434773 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.434779 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.434784 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:58:19.434789 | orchestrator | 2025-05-30 00:58:19.434795 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-30 00:58:19.434800 | orchestrator | Friday 30 May 2025 00:49:38 +0000 (0:00:01.241) 0:04:11.603 ************ 2025-05-30 00:58:19.434806 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:19.434811 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:19.434817 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:19.434822 | orchestrator | 2025-05-30 00:58:19.434831 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] ******************************* 2025-05-30 00:58:19.434843 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-30 00:58:19.434849 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-30 00:58:19.434854 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-30 00:58:19.434860 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.434865 | orchestrator | 2025-05-30 00:58:19.434871 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-05-30 00:58:19.434876 | orchestrator | Friday 30 May 2025 00:49:39 +0000 (0:00:01.310) 0:04:12.914 ************ 2025-05-30 00:58:19.434881 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.434887 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.434892 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.434898 | orchestrator | 2025-05-30 00:58:19.434903 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-30 00:58:19.434909 | orchestrator | Friday 30 May 2025 00:49:40 +0000 (0:00:00.990) 0:04:13.904 ************ 2025-05-30 00:58:19.434927 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:58:19.434932 | orchestrator | 2025-05-30 00:58:19.434937 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-05-30 00:58:19.434943 | orchestrator | Friday 30 May 2025 00:49:41 +0000 (0:00:00.475) 0:04:14.380 ************ 2025-05-30 00:58:19.434948 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.434954 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.434959 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.434964 | orchestrator | 2025-05-30 00:58:19.434970 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-05-30 00:58:19.434975 | orchestrator | Friday 30 May 2025 00:49:41 +0000 (0:00:00.291) 0:04:14.671 ************ 2025-05-30 00:58:19.434980 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.434986 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.434991 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:58:19.434997 | orchestrator | 2025-05-30 00:58:19.435002 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-05-30 00:58:19.435007 | orchestrator | Friday 30 May 2025 00:49:43 +0000 (0:00:01.554) 0:04:16.226 
************ 2025-05-30 00:58:19.435013 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-30 00:58:19.435018 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-30 00:58:19.435023 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-30 00:58:19.435029 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.435034 | orchestrator | 2025-05-30 00:58:19.435040 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-05-30 00:58:19.435045 | orchestrator | Friday 30 May 2025 00:49:43 +0000 (0:00:00.761) 0:04:16.987 ************ 2025-05-30 00:58:19.435050 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.435056 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.435061 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.435066 | orchestrator | 2025-05-30 00:58:19.435072 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-05-30 00:58:19.435077 | orchestrator | Friday 30 May 2025 00:49:44 +0000 (0:00:00.358) 0:04:17.345 ************ 2025-05-30 00:58:19.435082 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.435088 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.435093 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.435098 | orchestrator | 2025-05-30 00:58:19.435104 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-30 00:58:19.435109 | orchestrator | Friday 30 May 2025 00:49:44 +0000 (0:00:00.327) 0:04:17.673 ************ 2025-05-30 00:58:19.435114 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.435120 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.435125 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.435130 | orchestrator | 2025-05-30 00:58:19.435136 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-05-30 00:58:19.435146 | orchestrator | Friday 30 May 2025 00:49:45 +0000 (0:00:00.515) 0:04:18.188 ************ 2025-05-30 00:58:19.435151 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.435157 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.435162 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.435167 | orchestrator | 2025-05-30 00:58:19.435173 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-30 00:58:19.435178 | orchestrator | Friday 30 May 2025 00:49:45 +0000 (0:00:00.344) 0:04:18.533 ************ 2025-05-30 00:58:19.435184 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.435189 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.435194 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:58:19.435200 | orchestrator | 2025-05-30 00:58:19.435205 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-05-30 00:58:19.435210 | orchestrator | 2025-05-30 00:58:19.435216 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-30 00:58:19.435221 | orchestrator | Friday 30 May 2025 00:49:47 +0000 (0:00:02.036) 0:04:20.569 ************ 2025-05-30 00:58:19.435227 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:58:19.435232 | orchestrator | 2025-05-30 00:58:19.435237 | 
orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-30 00:58:19.435243 | orchestrator | Friday 30 May 2025 00:49:48 +0000 (0:00:00.747) 0:04:21.317 ************ 2025-05-30 00:58:19.435248 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.435254 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.435259 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.435264 | orchestrator | 2025-05-30 00:58:19.435270 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-30 00:58:19.435275 | orchestrator | Friday 30 May 2025 00:49:48 +0000 (0:00:00.754) 0:04:22.071 ************ 2025-05-30 00:58:19.435281 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.435286 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.435291 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.435297 | orchestrator | 2025-05-30 00:58:19.435306 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-30 00:58:19.435314 | orchestrator | Friday 30 May 2025 00:49:49 +0000 (0:00:00.353) 0:04:22.425 ************ 2025-05-30 00:58:19.435320 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.435325 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.435331 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.435336 | orchestrator | 2025-05-30 00:58:19.435342 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-30 00:58:19.435347 | orchestrator | Friday 30 May 2025 00:49:49 +0000 (0:00:00.602) 0:04:23.028 ************ 2025-05-30 00:58:19.435352 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.435358 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.435363 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.435368 | orchestrator | 2025-05-30 00:58:19.435374 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-30 00:58:19.435379 | orchestrator | Friday 30 May 2025 00:49:50 +0000 (0:00:00.340) 0:04:23.369 ************ 2025-05-30 00:58:19.435385 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.435390 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.435395 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.435401 | orchestrator | 2025-05-30 00:58:19.435406 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-30 00:58:19.435412 | orchestrator | Friday 30 May 2025 00:49:50 +0000 (0:00:00.718) 0:04:24.088 ************ 2025-05-30 00:58:19.435417 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.435422 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.435428 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.435433 | orchestrator | 2025-05-30 00:58:19.435443 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-30 00:58:19.435448 | orchestrator | Friday 30 May 2025 00:49:51 +0000 (0:00:00.335) 0:04:24.423 ************ 2025-05-30 00:58:19.435454 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.435459 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.435464 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.435470 | orchestrator | 2025-05-30 00:58:19.435475 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 
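The "check for a ... container" tasks in this play do nothing more than probe each node for an already-running Ceph daemon container of the given type; their results feed the handler_*_status facts set a few tasks further down, which in turn gate the restart handlers. A minimal sketch of that kind of probe, assuming a containerized deployment managed with podman (the container name pattern and the use of podman here are assumptions for illustration, not the exact ceph-ansible command):

    # Illustrative only: list any container whose name matches the mon container
    # for this host; a non-empty result means the monitor is already deployed.
    # The "ceph-mon-<short hostname>" naming is an assumed convention.
    podman ps --all --quiet --filter "name=ceph-mon-$(hostname -s)"
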
2025-05-30 00:58:19.435481 | orchestrator | Friday 30 May 2025 00:49:51 +0000 (0:00:00.656) 0:04:25.080 ************ 2025-05-30 00:58:19.435486 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.435491 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.435497 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.435502 | orchestrator | 2025-05-30 00:58:19.435508 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-30 00:58:19.435513 | orchestrator | Friday 30 May 2025 00:49:52 +0000 (0:00:00.373) 0:04:25.453 ************ 2025-05-30 00:58:19.435518 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.435524 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.435529 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.435535 | orchestrator | 2025-05-30 00:58:19.435540 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-30 00:58:19.435545 | orchestrator | Friday 30 May 2025 00:49:52 +0000 (0:00:00.370) 0:04:25.823 ************ 2025-05-30 00:58:19.435551 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.435556 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.435562 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.435567 | orchestrator | 2025-05-30 00:58:19.435572 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-30 00:58:19.435578 | orchestrator | Friday 30 May 2025 00:49:53 +0000 (0:00:00.340) 0:04:26.163 ************ 2025-05-30 00:58:19.435583 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.435589 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.435594 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.435599 | orchestrator | 2025-05-30 00:58:19.435605 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-30 00:58:19.435610 | orchestrator | Friday 30 May 2025 00:49:54 +0000 (0:00:01.091) 0:04:27.255 ************ 2025-05-30 00:58:19.435616 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.435621 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.435626 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.435632 | orchestrator | 2025-05-30 00:58:19.435637 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-30 00:58:19.435642 | orchestrator | Friday 30 May 2025 00:49:54 +0000 (0:00:00.373) 0:04:27.628 ************ 2025-05-30 00:58:19.435648 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.435653 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.435658 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.435664 | orchestrator | 2025-05-30 00:58:19.435669 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-30 00:58:19.435675 | orchestrator | Friday 30 May 2025 00:49:54 +0000 (0:00:00.358) 0:04:27.987 ************ 2025-05-30 00:58:19.435680 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.435686 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.435691 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.435696 | orchestrator | 2025-05-30 00:58:19.435702 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-30 00:58:19.435707 | orchestrator | Friday 30 May 2025 00:49:55 +0000 (0:00:00.330) 
0:04:28.317 ************ 2025-05-30 00:58:19.435712 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.435718 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.435723 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.435728 | orchestrator | 2025-05-30 00:58:19.435734 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-30 00:58:19.435743 | orchestrator | Friday 30 May 2025 00:49:55 +0000 (0:00:00.667) 0:04:28.985 ************ 2025-05-30 00:58:19.435748 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.435754 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.435759 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.435765 | orchestrator | 2025-05-30 00:58:19.435770 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-30 00:58:19.435775 | orchestrator | Friday 30 May 2025 00:49:56 +0000 (0:00:00.369) 0:04:29.355 ************ 2025-05-30 00:58:19.435781 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.435786 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.435791 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.435797 | orchestrator | 2025-05-30 00:58:19.435805 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-30 00:58:19.435814 | orchestrator | Friday 30 May 2025 00:49:56 +0000 (0:00:00.346) 0:04:29.702 ************ 2025-05-30 00:58:19.435819 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.435825 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.435830 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.435835 | orchestrator | 2025-05-30 00:58:19.435841 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-30 00:58:19.435846 | orchestrator | Friday 30 May 2025 00:49:56 +0000 (0:00:00.340) 0:04:30.042 ************ 2025-05-30 00:58:19.435852 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.435857 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.435862 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.435868 | orchestrator | 2025-05-30 00:58:19.435873 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-30 00:58:19.435879 | orchestrator | Friday 30 May 2025 00:49:57 +0000 (0:00:00.666) 0:04:30.709 ************ 2025-05-30 00:58:19.435884 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.435889 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.435895 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.435900 | orchestrator | 2025-05-30 00:58:19.435905 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-30 00:58:19.435941 | orchestrator | Friday 30 May 2025 00:49:57 +0000 (0:00:00.335) 0:04:31.045 ************ 2025-05-30 00:58:19.435948 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.435953 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.435958 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.435964 | orchestrator | 2025-05-30 00:58:19.435969 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-30 00:58:19.435975 | orchestrator | Friday 30 May 2025 00:49:58 +0000 (0:00:00.318) 0:04:31.363 ************ 2025-05-30 00:58:19.435980 | orchestrator | skipping: [testbed-node-0] 
2025-05-30 00:58:19.435986 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.435991 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.435996 | orchestrator | 2025-05-30 00:58:19.436002 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-30 00:58:19.436007 | orchestrator | Friday 30 May 2025 00:49:58 +0000 (0:00:00.320) 0:04:31.683 ************ 2025-05-30 00:58:19.436012 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.436018 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.436023 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.436029 | orchestrator | 2025-05-30 00:58:19.436034 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-30 00:58:19.436039 | orchestrator | Friday 30 May 2025 00:49:59 +0000 (0:00:00.531) 0:04:32.214 ************ 2025-05-30 00:58:19.436045 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.436050 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.436056 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.436061 | orchestrator | 2025-05-30 00:58:19.436066 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-30 00:58:19.436072 | orchestrator | Friday 30 May 2025 00:49:59 +0000 (0:00:00.317) 0:04:32.532 ************ 2025-05-30 00:58:19.436082 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.436088 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.436093 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.436098 | orchestrator | 2025-05-30 00:58:19.436104 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-30 00:58:19.436109 | orchestrator | Friday 30 May 2025 00:49:59 +0000 (0:00:00.286) 0:04:32.818 ************ 2025-05-30 00:58:19.436114 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.436120 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.436125 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.436130 | orchestrator | 2025-05-30 00:58:19.436136 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-30 00:58:19.436141 | orchestrator | Friday 30 May 2025 00:49:59 +0000 (0:00:00.291) 0:04:33.109 ************ 2025-05-30 00:58:19.436147 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.436152 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.436157 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.436163 | orchestrator | 2025-05-30 00:58:19.436168 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-30 00:58:19.436174 | orchestrator | Friday 30 May 2025 00:50:00 +0000 (0:00:00.487) 0:04:33.596 ************ 2025-05-30 00:58:19.436179 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.436183 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.436188 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.436193 | orchestrator | 2025-05-30 00:58:19.436198 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-30 00:58:19.436203 | orchestrator | Friday 30 May 2025 00:50:00 +0000 (0:00:00.295) 0:04:33.892 ************ 2025-05-30 00:58:19.436207 | orchestrator | skipping: [testbed-node-0] 2025-05-30 
00:58:19.436212 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.436217 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.436222 | orchestrator | 2025-05-30 00:58:19.436226 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-30 00:58:19.436231 | orchestrator | Friday 30 May 2025 00:50:01 +0000 (0:00:00.280) 0:04:34.172 ************ 2025-05-30 00:58:19.436236 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.436241 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.436246 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.436250 | orchestrator | 2025-05-30 00:58:19.436255 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-30 00:58:19.436260 | orchestrator | Friday 30 May 2025 00:50:01 +0000 (0:00:00.292) 0:04:34.464 ************ 2025-05-30 00:58:19.436265 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.436269 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.436274 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.436279 | orchestrator | 2025-05-30 00:58:19.436284 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-30 00:58:19.436288 | orchestrator | Friday 30 May 2025 00:50:01 +0000 (0:00:00.474) 0:04:34.939 ************ 2025-05-30 00:58:19.436297 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.436302 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.436310 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.436314 | orchestrator | 2025-05-30 00:58:19.436319 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-30 00:58:19.436324 | orchestrator | Friday 30 May 2025 00:50:02 +0000 (0:00:00.335) 0:04:35.275 ************ 2025-05-30 00:58:19.436329 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-30 00:58:19.436334 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-30 00:58:19.436339 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.436343 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-30 00:58:19.436348 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-30 00:58:19.436357 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.436361 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-30 00:58:19.436366 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-30 00:58:19.436371 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.436376 | orchestrator | 2025-05-30 00:58:19.436381 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-30 00:58:19.436385 | orchestrator | Friday 30 May 2025 00:50:02 +0000 (0:00:00.364) 0:04:35.640 ************ 2025-05-30 00:58:19.436390 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-30 00:58:19.436395 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-30 00:58:19.436400 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.436405 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-30 00:58:19.436409 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-30 00:58:19.436414 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.436419 | orchestrator | 
skipping: [testbed-node-2] => (item=osd memory target)  2025-05-30 00:58:19.436424 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-30 00:58:19.436429 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.436433 | orchestrator | 2025-05-30 00:58:19.436438 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-30 00:58:19.436443 | orchestrator | Friday 30 May 2025 00:50:02 +0000 (0:00:00.327) 0:04:35.967 ************ 2025-05-30 00:58:19.436448 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.436453 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.436457 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.436462 | orchestrator | 2025-05-30 00:58:19.436467 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-30 00:58:19.436472 | orchestrator | Friday 30 May 2025 00:50:03 +0000 (0:00:00.448) 0:04:36.416 ************ 2025-05-30 00:58:19.436476 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.436481 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.436486 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.436491 | orchestrator | 2025-05-30 00:58:19.436496 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-30 00:58:19.436501 | orchestrator | Friday 30 May 2025 00:50:03 +0000 (0:00:00.282) 0:04:36.699 ************ 2025-05-30 00:58:19.436505 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.436510 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.436515 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.436520 | orchestrator | 2025-05-30 00:58:19.436524 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-30 00:58:19.436529 | orchestrator | Friday 30 May 2025 00:50:03 +0000 (0:00:00.310) 0:04:37.009 ************ 2025-05-30 00:58:19.436534 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.436539 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.436544 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.436548 | orchestrator | 2025-05-30 00:58:19.436553 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-30 00:58:19.436558 | orchestrator | Friday 30 May 2025 00:50:04 +0000 (0:00:00.346) 0:04:37.356 ************ 2025-05-30 00:58:19.436563 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.436568 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.436572 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.436577 | orchestrator | 2025-05-30 00:58:19.436582 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-30 00:58:19.436587 | orchestrator | Friday 30 May 2025 00:50:04 +0000 (0:00:00.598) 0:04:37.955 ************ 2025-05-30 00:58:19.436591 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.436596 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.436601 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.436609 | orchestrator | 2025-05-30 00:58:19.436614 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-30 00:58:19.436619 | orchestrator | Friday 30 May 2025 00:50:05 +0000 (0:00:00.359) 0:04:38.314 
************ 2025-05-30 00:58:19.436624 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-30 00:58:19.436628 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-30 00:58:19.436633 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-30 00:58:19.436638 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.436643 | orchestrator | 2025-05-30 00:58:19.436647 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-30 00:58:19.436652 | orchestrator | Friday 30 May 2025 00:50:05 +0000 (0:00:00.454) 0:04:38.769 ************ 2025-05-30 00:58:19.436657 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-30 00:58:19.436662 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-30 00:58:19.436667 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-30 00:58:19.436671 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.436676 | orchestrator | 2025-05-30 00:58:19.436681 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-30 00:58:19.436686 | orchestrator | Friday 30 May 2025 00:50:06 +0000 (0:00:00.466) 0:04:39.235 ************ 2025-05-30 00:58:19.436691 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-30 00:58:19.436698 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-30 00:58:19.436706 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-30 00:58:19.436711 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.436716 | orchestrator | 2025-05-30 00:58:19.436721 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-30 00:58:19.436725 | orchestrator | Friday 30 May 2025 00:50:06 +0000 (0:00:00.453) 0:04:39.689 ************ 2025-05-30 00:58:19.436730 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.436735 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.436740 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.436745 | orchestrator | 2025-05-30 00:58:19.436749 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-30 00:58:19.436754 | orchestrator | Friday 30 May 2025 00:50:07 +0000 (0:00:00.608) 0:04:40.298 ************ 2025-05-30 00:58:19.436759 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-30 00:58:19.436764 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.436768 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-30 00:58:19.436773 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.436778 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-30 00:58:19.436783 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.436787 | orchestrator | 2025-05-30 00:58:19.436792 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-30 00:58:19.436797 | orchestrator | Friday 30 May 2025 00:50:07 +0000 (0:00:00.612) 0:04:40.911 ************ 2025-05-30 00:58:19.436802 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.436807 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.436812 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.436816 | orchestrator | 2025-05-30 00:58:19.436821 | orchestrator | TASK [ceph-facts : reset rgw_instances 
(workaround)] *************************** 2025-05-30 00:58:19.436826 | orchestrator | Friday 30 May 2025 00:50:08 +0000 (0:00:00.354) 0:04:41.265 ************ 2025-05-30 00:58:19.436831 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.436836 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.436840 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.436845 | orchestrator | 2025-05-30 00:58:19.436850 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-30 00:58:19.436855 | orchestrator | Friday 30 May 2025 00:50:08 +0000 (0:00:00.352) 0:04:41.618 ************ 2025-05-30 00:58:19.436863 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-30 00:58:19.436868 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.436872 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-30 00:58:19.436877 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.436882 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-30 00:58:19.436887 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.436892 | orchestrator | 2025-05-30 00:58:19.436896 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-30 00:58:19.436901 | orchestrator | Friday 30 May 2025 00:50:09 +0000 (0:00:00.992) 0:04:42.610 ************ 2025-05-30 00:58:19.436906 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.436922 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.436927 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.436932 | orchestrator | 2025-05-30 00:58:19.436936 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-30 00:58:19.436941 | orchestrator | Friday 30 May 2025 00:50:09 +0000 (0:00:00.312) 0:04:42.923 ************ 2025-05-30 00:58:19.436946 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-30 00:58:19.436951 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-30 00:58:19.436956 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-30 00:58:19.436960 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.436965 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-30 00:58:19.436970 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-30 00:58:19.436975 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-30 00:58:19.436979 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.436984 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-30 00:58:19.436989 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-30 00:58:19.436994 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-30 00:58:19.436998 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.437003 | orchestrator | 2025-05-30 00:58:19.437008 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-30 00:58:19.437013 | orchestrator | Friday 30 May 2025 00:50:10 +0000 (0:00:00.537) 0:04:43.460 ************ 2025-05-30 00:58:19.437018 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.437023 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.437027 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.437032 | orchestrator | 2025-05-30 
00:58:19.437037 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-30 00:58:19.437042 | orchestrator | Friday 30 May 2025 00:50:11 +0000 (0:00:00.720) 0:04:44.180 ************ 2025-05-30 00:58:19.437047 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.437051 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.437056 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.437061 | orchestrator | 2025-05-30 00:58:19.437066 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-30 00:58:19.437070 | orchestrator | Friday 30 May 2025 00:50:11 +0000 (0:00:00.486) 0:04:44.667 ************ 2025-05-30 00:58:19.437075 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.437080 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.437085 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.437090 | orchestrator | 2025-05-30 00:58:19.437095 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-30 00:58:19.437099 | orchestrator | Friday 30 May 2025 00:50:12 +0000 (0:00:00.787) 0:04:45.454 ************ 2025-05-30 00:58:19.437104 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.437109 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.437114 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.437118 | orchestrator | 2025-05-30 00:58:19.437126 | orchestrator | TASK [ceph-mon : set_fact container_exec_cmd] ********************************** 2025-05-30 00:58:19.437139 | orchestrator | Friday 30 May 2025 00:50:12 +0000 (0:00:00.560) 0:04:46.015 ************ 2025-05-30 00:58:19.437144 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.437148 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.437153 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.437158 | orchestrator | 2025-05-30 00:58:19.437163 | orchestrator | TASK [ceph-mon : include deploy_monitors.yml] ********************************** 2025-05-30 00:58:19.437168 | orchestrator | Friday 30 May 2025 00:50:13 +0000 (0:00:00.491) 0:04:46.507 ************ 2025-05-30 00:58:19.437173 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:58:19.437177 | orchestrator | 2025-05-30 00:58:19.437182 | orchestrator | TASK [ceph-mon : check if monitor initial keyring already exists] ************** 2025-05-30 00:58:19.437187 | orchestrator | Friday 30 May 2025 00:50:13 +0000 (0:00:00.585) 0:04:47.092 ************ 2025-05-30 00:58:19.437192 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.437197 | orchestrator | 2025-05-30 00:58:19.437201 | orchestrator | TASK [ceph-mon : generate monitor initial keyring] ***************************** 2025-05-30 00:58:19.437206 | orchestrator | Friday 30 May 2025 00:50:14 +0000 (0:00:00.153) 0:04:47.246 ************ 2025-05-30 00:58:19.437211 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-05-30 00:58:19.437216 | orchestrator | 2025-05-30 00:58:19.437221 | orchestrator | TASK [ceph-mon : set_fact _initial_mon_key_success] **************************** 2025-05-30 00:58:19.437225 | orchestrator | Friday 30 May 2025 00:50:14 +0000 (0:00:00.624) 0:04:47.870 ************ 2025-05-30 00:58:19.437230 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.437235 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.437240 | orchestrator | ok: 
[testbed-node-2] 2025-05-30 00:58:19.437245 | orchestrator | 2025-05-30 00:58:19.437250 | orchestrator | TASK [ceph-mon : get initial keyring when it already exists] ******************* 2025-05-30 00:58:19.437254 | orchestrator | Friday 30 May 2025 00:50:15 +0000 (0:00:00.470) 0:04:48.341 ************ 2025-05-30 00:58:19.437259 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.437264 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.437269 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.437274 | orchestrator | 2025-05-30 00:58:19.437278 | orchestrator | TASK [ceph-mon : create monitor initial keyring] ******************************* 2025-05-30 00:58:19.437283 | orchestrator | Friday 30 May 2025 00:50:15 +0000 (0:00:00.392) 0:04:48.733 ************ 2025-05-30 00:58:19.437288 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:19.437293 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:19.437298 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:19.437302 | orchestrator | 2025-05-30 00:58:19.437307 | orchestrator | TASK [ceph-mon : copy the initial key in /etc/ceph (for containers)] *********** 2025-05-30 00:58:19.437312 | orchestrator | Friday 30 May 2025 00:50:16 +0000 (0:00:01.195) 0:04:49.929 ************ 2025-05-30 00:58:19.437317 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:19.437322 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:19.437326 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:19.437331 | orchestrator | 2025-05-30 00:58:19.437336 | orchestrator | TASK [ceph-mon : create monitor directory] ************************************* 2025-05-30 00:58:19.437341 | orchestrator | Friday 30 May 2025 00:50:17 +0000 (0:00:00.764) 0:04:50.693 ************ 2025-05-30 00:58:19.437345 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:19.437350 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:19.437355 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:19.437360 | orchestrator | 2025-05-30 00:58:19.437364 | orchestrator | TASK [ceph-mon : recursively fix ownership of monitor directory] *************** 2025-05-30 00:58:19.437369 | orchestrator | Friday 30 May 2025 00:50:18 +0000 (0:00:00.954) 0:04:51.648 ************ 2025-05-30 00:58:19.437374 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.437379 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.437384 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.437388 | orchestrator | 2025-05-30 00:58:19.437393 | orchestrator | TASK [ceph-mon : create custom admin keyring] ********************************** 2025-05-30 00:58:19.437402 | orchestrator | Friday 30 May 2025 00:50:19 +0000 (0:00:00.677) 0:04:52.325 ************ 2025-05-30 00:58:19.437407 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.437412 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.437416 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.437421 | orchestrator | 2025-05-30 00:58:19.437426 | orchestrator | TASK [ceph-mon : set_fact ceph-authtool container command] ********************* 2025-05-30 00:58:19.437431 | orchestrator | Friday 30 May 2025 00:50:19 +0000 (0:00:00.371) 0:04:52.696 ************ 2025-05-30 00:58:19.437436 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.437440 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.437445 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.437450 | orchestrator | 2025-05-30 00:58:19.437455 | orchestrator | TASK [ceph-mon : 
import admin keyring into mon keyring] ************************ 2025-05-30 00:58:19.437460 | orchestrator | Friday 30 May 2025 00:50:19 +0000 (0:00:00.350) 0:04:53.047 ************ 2025-05-30 00:58:19.437464 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.437469 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.437474 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.437479 | orchestrator | 2025-05-30 00:58:19.437484 | orchestrator | TASK [ceph-mon : set_fact ceph-mon container command] ************************** 2025-05-30 00:58:19.437488 | orchestrator | Friday 30 May 2025 00:50:20 +0000 (0:00:00.628) 0:04:53.675 ************ 2025-05-30 00:58:19.437493 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.437498 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.437503 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.437507 | orchestrator | 2025-05-30 00:58:19.437512 | orchestrator | TASK [ceph-mon : ceph monitor mkfs with keyring] ******************************* 2025-05-30 00:58:19.437517 | orchestrator | Friday 30 May 2025 00:50:20 +0000 (0:00:00.365) 0:04:54.040 ************ 2025-05-30 00:58:19.437522 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:19.437527 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:19.437531 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:19.437536 | orchestrator | 2025-05-30 00:58:19.437541 | orchestrator | TASK [ceph-mon : ceph monitor mkfs without keyring] **************************** 2025-05-30 00:58:19.437546 | orchestrator | Friday 30 May 2025 00:50:22 +0000 (0:00:01.236) 0:04:55.277 ************ 2025-05-30 00:58:19.437551 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.437558 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.437563 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.437568 | orchestrator | 2025-05-30 00:58:19.437576 | orchestrator | TASK [ceph-mon : include start_monitor.yml] ************************************ 2025-05-30 00:58:19.437581 | orchestrator | Friday 30 May 2025 00:50:22 +0000 (0:00:00.455) 0:04:55.733 ************ 2025-05-30 00:58:19.437586 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:58:19.437591 | orchestrator | 2025-05-30 00:58:19.437595 | orchestrator | TASK [ceph-mon : ensure systemd service override directory exists] ************* 2025-05-30 00:58:19.437600 | orchestrator | Friday 30 May 2025 00:50:23 +0000 (0:00:00.506) 0:04:56.239 ************ 2025-05-30 00:58:19.437605 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.437610 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.437614 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.437619 | orchestrator | 2025-05-30 00:58:19.437624 | orchestrator | TASK [ceph-mon : add ceph-mon systemd service overrides] *********************** 2025-05-30 00:58:19.437629 | orchestrator | Friday 30 May 2025 00:50:23 +0000 (0:00:00.286) 0:04:56.525 ************ 2025-05-30 00:58:19.437634 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.437638 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.437643 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.437648 | orchestrator | 2025-05-30 00:58:19.437653 | orchestrator | TASK [ceph-mon : include_tasks systemd.yml] ************************************ 2025-05-30 00:58:19.437658 | orchestrator | Friday 30 May 2025 00:50:23 +0000 
(0:00:00.418) 0:04:56.943 ************ 2025-05-30 00:58:19.437666 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:58:19.437671 | orchestrator | 2025-05-30 00:58:19.437675 | orchestrator | TASK [ceph-mon : generate systemd unit file for mon container] ***************** 2025-05-30 00:58:19.437680 | orchestrator | Friday 30 May 2025 00:50:24 +0000 (0:00:00.548) 0:04:57.492 ************ 2025-05-30 00:58:19.437685 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:19.437690 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:19.437695 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:19.437699 | orchestrator | 2025-05-30 00:58:19.437704 | orchestrator | TASK [ceph-mon : generate systemd ceph-mon target file] ************************ 2025-05-30 00:58:19.437709 | orchestrator | Friday 30 May 2025 00:50:25 +0000 (0:00:01.183) 0:04:58.675 ************ 2025-05-30 00:58:19.437714 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:19.437719 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:19.437723 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:19.437728 | orchestrator | 2025-05-30 00:58:19.437733 | orchestrator | TASK [ceph-mon : enable ceph-mon.target] *************************************** 2025-05-30 00:58:19.437738 | orchestrator | Friday 30 May 2025 00:50:26 +0000 (0:00:01.322) 0:04:59.997 ************ 2025-05-30 00:58:19.437742 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:19.437747 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:19.437752 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:19.437757 | orchestrator | 2025-05-30 00:58:19.437762 | orchestrator | TASK [ceph-mon : start the monitor service] ************************************ 2025-05-30 00:58:19.437766 | orchestrator | Friday 30 May 2025 00:50:28 +0000 (0:00:01.839) 0:05:01.837 ************ 2025-05-30 00:58:19.437771 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:19.437776 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:19.437781 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:19.437785 | orchestrator | 2025-05-30 00:58:19.437790 | orchestrator | TASK [ceph-mon : include_tasks ceph_keys.yml] ********************************** 2025-05-30 00:58:19.437795 | orchestrator | Friday 30 May 2025 00:50:30 +0000 (0:00:01.762) 0:05:03.599 ************ 2025-05-30 00:58:19.437800 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:58:19.437805 | orchestrator | 2025-05-30 00:58:19.437809 | orchestrator | TASK [ceph-mon : waiting for the monitor(s) to form the quorum...] ************* 2025-05-30 00:58:19.437814 | orchestrator | Friday 30 May 2025 00:50:31 +0000 (0:00:00.872) 0:05:04.472 ************ 2025-05-30 00:58:19.437819 | orchestrator | FAILED - RETRYING: [testbed-node-0]: waiting for the monitor(s) to form the quorum... (10 retries left). 
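The FAILED - RETRYING line above is expected noise rather than an error: the task polls until the three new monitors (testbed-node-0/1/2) have formed a quorum, and the first attempt typically fails while the freshly started ceph-mon containers are still bootstrapping; it succeeds on a later retry, as the "ok" that follows shows. A rough manual equivalent of that check, run on one of the monitor nodes (the containerized invocation and the container name are assumptions for illustration, not taken from this playbook):

    # Illustrative only: query the local monitor for quorum status. Once all
    # three mons have joined, "quorum_names" in the JSON output lists them.
    # "ceph-mon-testbed-node-0" is an assumed container name.
    podman exec ceph-mon-testbed-node-0 ceph quorum_status --format json-pretty
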
2025-05-30 00:58:19.437824 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.437829 | orchestrator | 2025-05-30 00:58:19.437833 | orchestrator | TASK [ceph-mon : fetch ceph initial keys] ************************************** 2025-05-30 00:58:19.437838 | orchestrator | Friday 30 May 2025 00:50:52 +0000 (0:00:21.479) 0:05:25.952 ************ 2025-05-30 00:58:19.437843 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.437848 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.437853 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.437857 | orchestrator | 2025-05-30 00:58:19.437862 | orchestrator | TASK [ceph-mon : include secure_cluster.yml] *********************************** 2025-05-30 00:58:19.437867 | orchestrator | Friday 30 May 2025 00:51:00 +0000 (0:00:07.362) 0:05:33.315 ************ 2025-05-30 00:58:19.437872 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.437877 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.437881 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.437886 | orchestrator | 2025-05-30 00:58:19.437891 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-30 00:58:19.437896 | orchestrator | Friday 30 May 2025 00:51:01 +0000 (0:00:01.209) 0:05:34.525 ************ 2025-05-30 00:58:19.437901 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:19.437905 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:19.437926 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:19.437931 | orchestrator | 2025-05-30 00:58:19.437936 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-05-30 00:58:19.437941 | orchestrator | Friday 30 May 2025 00:51:02 +0000 (0:00:00.707) 0:05:35.233 ************ 2025-05-30 00:58:19.437946 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:58:19.437951 | orchestrator | 2025-05-30 00:58:19.437955 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-05-30 00:58:19.437963 | orchestrator | Friday 30 May 2025 00:51:02 +0000 (0:00:00.798) 0:05:36.032 ************ 2025-05-30 00:58:19.437968 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.437976 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.437981 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.437985 | orchestrator | 2025-05-30 00:58:19.437990 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-05-30 00:58:19.437995 | orchestrator | Friday 30 May 2025 00:51:03 +0000 (0:00:00.361) 0:05:36.393 ************ 2025-05-30 00:58:19.438000 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:19.438005 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:19.438010 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:19.438112 | orchestrator | 2025-05-30 00:58:19.438117 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-05-30 00:58:19.438123 | orchestrator | Friday 30 May 2025 00:51:04 +0000 (0:00:01.178) 0:05:37.572 ************ 2025-05-30 00:58:19.438127 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-30 00:58:19.438132 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-30 00:58:19.438137 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-30 
00:58:19.438142 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.438147 | orchestrator | 2025-05-30 00:58:19.438152 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-05-30 00:58:19.438156 | orchestrator | Friday 30 May 2025 00:51:05 +0000 (0:00:01.349) 0:05:38.922 ************ 2025-05-30 00:58:19.438161 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.438166 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.438171 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.438176 | orchestrator | 2025-05-30 00:58:19.438181 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-30 00:58:19.438185 | orchestrator | Friday 30 May 2025 00:51:06 +0000 (0:00:00.394) 0:05:39.316 ************ 2025-05-30 00:58:19.438190 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:19.438195 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:19.438200 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:19.438205 | orchestrator | 2025-05-30 00:58:19.438210 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-05-30 00:58:19.438215 | orchestrator | 2025-05-30 00:58:19.438219 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-30 00:58:19.438224 | orchestrator | Friday 30 May 2025 00:51:08 +0000 (0:00:02.189) 0:05:41.506 ************ 2025-05-30 00:58:19.438229 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:58:19.438234 | orchestrator | 2025-05-30 00:58:19.438239 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-30 00:58:19.438244 | orchestrator | Friday 30 May 2025 00:51:09 +0000 (0:00:00.864) 0:05:42.371 ************ 2025-05-30 00:58:19.438249 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.438254 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.438258 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.438263 | orchestrator | 2025-05-30 00:58:19.438268 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-30 00:58:19.438273 | orchestrator | Friday 30 May 2025 00:51:10 +0000 (0:00:00.774) 0:05:43.145 ************ 2025-05-30 00:58:19.438282 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.438287 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.438292 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.438297 | orchestrator | 2025-05-30 00:58:19.438302 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-30 00:58:19.438307 | orchestrator | Friday 30 May 2025 00:51:10 +0000 (0:00:00.344) 0:05:43.489 ************ 2025-05-30 00:58:19.438311 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.438316 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.438321 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.438326 | orchestrator | 2025-05-30 00:58:19.438331 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-30 00:58:19.438336 | orchestrator | Friday 30 May 2025 00:51:11 +0000 (0:00:00.790) 0:05:44.279 ************ 2025-05-30 00:58:19.438340 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.438345 | orchestrator | skipping: 
[testbed-node-1] 2025-05-30 00:58:19.438350 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.438355 | orchestrator | 2025-05-30 00:58:19.438360 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-30 00:58:19.438365 | orchestrator | Friday 30 May 2025 00:51:11 +0000 (0:00:00.338) 0:05:44.618 ************ 2025-05-30 00:58:19.438369 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.438374 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.438379 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.438384 | orchestrator | 2025-05-30 00:58:19.438389 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-30 00:58:19.438393 | orchestrator | Friday 30 May 2025 00:51:12 +0000 (0:00:00.776) 0:05:45.394 ************ 2025-05-30 00:58:19.438398 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.438403 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.438408 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.438412 | orchestrator | 2025-05-30 00:58:19.438417 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-30 00:58:19.438422 | orchestrator | Friday 30 May 2025 00:51:12 +0000 (0:00:00.366) 0:05:45.760 ************ 2025-05-30 00:58:19.438427 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.438432 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.438436 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.438441 | orchestrator | 2025-05-30 00:58:19.438446 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-30 00:58:19.438451 | orchestrator | Friday 30 May 2025 00:51:13 +0000 (0:00:00.614) 0:05:46.375 ************ 2025-05-30 00:58:19.438456 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.438460 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.438465 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.438470 | orchestrator | 2025-05-30 00:58:19.438474 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-30 00:58:19.438479 | orchestrator | Friday 30 May 2025 00:51:13 +0000 (0:00:00.353) 0:05:46.728 ************ 2025-05-30 00:58:19.438484 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.438506 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.438512 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.438517 | orchestrator | 2025-05-30 00:58:19.438525 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-30 00:58:19.438530 | orchestrator | Friday 30 May 2025 00:51:13 +0000 (0:00:00.394) 0:05:47.123 ************ 2025-05-30 00:58:19.438535 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.438539 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.438544 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.438549 | orchestrator | 2025-05-30 00:58:19.438554 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-30 00:58:19.438559 | orchestrator | Friday 30 May 2025 00:51:14 +0000 (0:00:00.323) 0:05:47.446 ************ 2025-05-30 00:58:19.438563 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.438573 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.438577 | orchestrator | ok: [testbed-node-2] 
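[editor's note] The per-daemon "check for a ... container" tasks above all follow one pattern: list any running container whose name matches the daemon on this host, register the (possibly empty) result, and let the set_fact handler_*_status tasks that follow turn it into a boolean the restart handlers can use; daemons not expected on these mon/mgr hosts are simply skipped. A hedged sketch of that pattern, assuming docker and a ceph-<daemon>-<hostname> naming scheme (illustrative, not copied from ceph-ansible):

    # Sketch only: detect a running ceph-crash container and expose it as a fact.
    - name: check for a ceph-crash container
      ansible.builtin.command: docker ps -q --filter "name=ceph-crash-{{ ansible_facts['hostname'] }}"
      register: ceph_crash_container_stat
      changed_when: false
      failed_when: false

    - name: set_fact handler_crash_status
      ansible.builtin.set_fact:
        handler_crash_status: "{{ ceph_crash_container_stat.stdout_lines | length > 0 }}"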
2025-05-30 00:58:19.438582 | orchestrator | 2025-05-30 00:58:19.438587 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-30 00:58:19.438592 | orchestrator | Friday 30 May 2025 00:51:15 +0000 (0:00:01.010) 0:05:48.456 ************ 2025-05-30 00:58:19.438596 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.438601 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.438606 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.438611 | orchestrator | 2025-05-30 00:58:19.438615 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-30 00:58:19.438620 | orchestrator | Friday 30 May 2025 00:51:15 +0000 (0:00:00.345) 0:05:48.802 ************ 2025-05-30 00:58:19.438625 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.438630 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.438634 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.438639 | orchestrator | 2025-05-30 00:58:19.438644 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-30 00:58:19.438649 | orchestrator | Friday 30 May 2025 00:51:16 +0000 (0:00:00.349) 0:05:49.152 ************ 2025-05-30 00:58:19.438654 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.438658 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.438663 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.438668 | orchestrator | 2025-05-30 00:58:19.438672 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-30 00:58:19.438677 | orchestrator | Friday 30 May 2025 00:51:16 +0000 (0:00:00.366) 0:05:49.518 ************ 2025-05-30 00:58:19.438682 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.438687 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.438691 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.438696 | orchestrator | 2025-05-30 00:58:19.438701 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-30 00:58:19.438706 | orchestrator | Friday 30 May 2025 00:51:17 +0000 (0:00:00.764) 0:05:50.283 ************ 2025-05-30 00:58:19.438710 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.438715 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.438720 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.438724 | orchestrator | 2025-05-30 00:58:19.438729 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-30 00:58:19.438734 | orchestrator | Friday 30 May 2025 00:51:17 +0000 (0:00:00.382) 0:05:50.666 ************ 2025-05-30 00:58:19.438739 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.438743 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.438748 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.438753 | orchestrator | 2025-05-30 00:58:19.438757 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-30 00:58:19.438762 | orchestrator | Friday 30 May 2025 00:51:17 +0000 (0:00:00.324) 0:05:50.990 ************ 2025-05-30 00:58:19.438767 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.438772 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.438777 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.438782 | orchestrator | 2025-05-30 00:58:19.438787 | 
orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-30 00:58:19.438791 | orchestrator | Friday 30 May 2025 00:51:18 +0000 (0:00:00.304) 0:05:51.294 ************ 2025-05-30 00:58:19.438796 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.438801 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.438806 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.438811 | orchestrator | 2025-05-30 00:58:19.438816 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-30 00:58:19.438820 | orchestrator | Friday 30 May 2025 00:51:18 +0000 (0:00:00.587) 0:05:51.882 ************ 2025-05-30 00:58:19.438825 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.438830 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.438835 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.438843 | orchestrator | 2025-05-30 00:58:19.438848 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-30 00:58:19.438853 | orchestrator | Friday 30 May 2025 00:51:19 +0000 (0:00:00.343) 0:05:52.225 ************ 2025-05-30 00:58:19.438857 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.438862 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.438867 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.438872 | orchestrator | 2025-05-30 00:58:19.438877 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-30 00:58:19.438882 | orchestrator | Friday 30 May 2025 00:51:19 +0000 (0:00:00.345) 0:05:52.571 ************ 2025-05-30 00:58:19.438886 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.438891 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.438896 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.438901 | orchestrator | 2025-05-30 00:58:19.438906 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-30 00:58:19.438921 | orchestrator | Friday 30 May 2025 00:51:19 +0000 (0:00:00.332) 0:05:52.904 ************ 2025-05-30 00:58:19.438927 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.438932 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.438937 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.438941 | orchestrator | 2025-05-30 00:58:19.438946 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-30 00:58:19.438951 | orchestrator | Friday 30 May 2025 00:51:20 +0000 (0:00:00.603) 0:05:53.507 ************ 2025-05-30 00:58:19.438956 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.438975 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.438981 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.438986 | orchestrator | 2025-05-30 00:58:19.438993 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-30 00:58:19.438998 | orchestrator | Friday 30 May 2025 00:51:20 +0000 (0:00:00.355) 0:05:53.863 ************ 2025-05-30 00:58:19.439003 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.439008 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.439012 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.439017 | orchestrator | 2025-05-30 00:58:19.439022 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 
2025-05-30 00:58:19.439027 | orchestrator | Friday 30 May 2025 00:51:21 +0000 (0:00:00.343) 0:05:54.207 ************ 2025-05-30 00:58:19.439031 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.439036 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.439041 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.439046 | orchestrator | 2025-05-30 00:58:19.439050 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-30 00:58:19.439055 | orchestrator | Friday 30 May 2025 00:51:21 +0000 (0:00:00.342) 0:05:54.549 ************ 2025-05-30 00:58:19.439060 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.439065 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.439070 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.439074 | orchestrator | 2025-05-30 00:58:19.439079 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-30 00:58:19.439084 | orchestrator | Friday 30 May 2025 00:51:22 +0000 (0:00:00.616) 0:05:55.165 ************ 2025-05-30 00:58:19.439089 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.439094 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.439098 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.439103 | orchestrator | 2025-05-30 00:58:19.439108 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-30 00:58:19.439113 | orchestrator | Friday 30 May 2025 00:51:22 +0000 (0:00:00.337) 0:05:55.503 ************ 2025-05-30 00:58:19.439118 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.439122 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.439127 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.439136 | orchestrator | 2025-05-30 00:58:19.439141 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-30 00:58:19.439146 | orchestrator | Friday 30 May 2025 00:51:22 +0000 (0:00:00.363) 0:05:55.866 ************ 2025-05-30 00:58:19.439150 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.439155 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.439160 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.439165 | orchestrator | 2025-05-30 00:58:19.439169 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-30 00:58:19.439174 | orchestrator | Friday 30 May 2025 00:51:23 +0000 (0:00:00.339) 0:05:56.205 ************ 2025-05-30 00:58:19.439179 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.439184 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.439188 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.439193 | orchestrator | 2025-05-30 00:58:19.439198 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-30 00:58:19.439203 | orchestrator | Friday 30 May 2025 00:51:23 +0000 (0:00:00.574) 0:05:56.780 ************ 2025-05-30 00:58:19.439207 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.439212 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.439217 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.439222 | orchestrator | 2025-05-30 00:58:19.439226 | orchestrator | TASK [ceph-config : set_fact 
_osd_memory_target, override from ceph_conf_overrides] *** 2025-05-30 00:58:19.439231 | orchestrator | Friday 30 May 2025 00:51:23 +0000 (0:00:00.330) 0:05:57.110 ************ 2025-05-30 00:58:19.439236 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-30 00:58:19.439241 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-30 00:58:19.439245 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-30 00:58:19.439250 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-30 00:58:19.439255 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.439260 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.439264 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-30 00:58:19.439269 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-30 00:58:19.439274 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.439279 | orchestrator | 2025-05-30 00:58:19.439283 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-30 00:58:19.439288 | orchestrator | Friday 30 May 2025 00:51:24 +0000 (0:00:00.428) 0:05:57.538 ************ 2025-05-30 00:58:19.439293 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-30 00:58:19.439298 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-30 00:58:19.439303 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.439307 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-30 00:58:19.439312 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-30 00:58:19.439317 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.439321 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-30 00:58:19.439326 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-30 00:58:19.439331 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.439335 | orchestrator | 2025-05-30 00:58:19.439340 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-30 00:58:19.439345 | orchestrator | Friday 30 May 2025 00:51:24 +0000 (0:00:00.418) 0:05:57.957 ************ 2025-05-30 00:58:19.439350 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.439354 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.439359 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.439364 | orchestrator | 2025-05-30 00:58:19.439369 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-30 00:58:19.439373 | orchestrator | Friday 30 May 2025 00:51:25 +0000 (0:00:00.635) 0:05:58.593 ************ 2025-05-30 00:58:19.439395 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.439401 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.439408 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.439413 | orchestrator | 2025-05-30 00:58:19.439418 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-30 00:58:19.439423 | orchestrator | Friday 30 May 2025 00:51:25 +0000 (0:00:00.371) 0:05:58.965 ************ 2025-05-30 00:58:19.439428 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.439432 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.439437 | orchestrator | skipping: [testbed-node-2] 2025-05-30 
00:58:19.439442 | orchestrator | 2025-05-30 00:58:19.439447 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-30 00:58:19.439451 | orchestrator | Friday 30 May 2025 00:51:26 +0000 (0:00:00.381) 0:05:59.346 ************ 2025-05-30 00:58:19.439456 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.439461 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.439465 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.439470 | orchestrator | 2025-05-30 00:58:19.439475 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-30 00:58:19.439480 | orchestrator | Friday 30 May 2025 00:51:26 +0000 (0:00:00.333) 0:05:59.680 ************ 2025-05-30 00:58:19.439485 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.439489 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.439494 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.439499 | orchestrator | 2025-05-30 00:58:19.439503 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-30 00:58:19.439508 | orchestrator | Friday 30 May 2025 00:51:27 +0000 (0:00:00.595) 0:06:00.275 ************ 2025-05-30 00:58:19.439513 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.439518 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.439522 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.439527 | orchestrator | 2025-05-30 00:58:19.439532 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-30 00:58:19.439537 | orchestrator | Friday 30 May 2025 00:51:27 +0000 (0:00:00.340) 0:06:00.616 ************ 2025-05-30 00:58:19.439541 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-30 00:58:19.439546 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-30 00:58:19.439551 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-30 00:58:19.439556 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.439560 | orchestrator | 2025-05-30 00:58:19.439565 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-30 00:58:19.439570 | orchestrator | Friday 30 May 2025 00:51:27 +0000 (0:00:00.438) 0:06:01.054 ************ 2025-05-30 00:58:19.439574 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-30 00:58:19.439579 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-30 00:58:19.439584 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-30 00:58:19.439589 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.439593 | orchestrator | 2025-05-30 00:58:19.439598 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-30 00:58:19.439603 | orchestrator | Friday 30 May 2025 00:51:28 +0000 (0:00:00.440) 0:06:01.495 ************ 2025-05-30 00:58:19.439607 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-30 00:58:19.439612 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-30 00:58:19.439617 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-30 00:58:19.439622 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.439626 | orchestrator | 2025-05-30 00:58:19.439631 | orchestrator | TASK [ceph-facts : reset 
rgw_instances (workaround)] *************************** 2025-05-30 00:58:19.439636 | orchestrator | Friday 30 May 2025 00:51:28 +0000 (0:00:00.480) 0:06:01.975 ************ 2025-05-30 00:58:19.439646 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.439651 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.439655 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.439660 | orchestrator | 2025-05-30 00:58:19.439665 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-30 00:58:19.439670 | orchestrator | Friday 30 May 2025 00:51:29 +0000 (0:00:00.501) 0:06:02.477 ************ 2025-05-30 00:58:19.439675 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-30 00:58:19.439679 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.439684 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-30 00:58:19.439689 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.439693 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-30 00:58:19.439698 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.439703 | orchestrator | 2025-05-30 00:58:19.439708 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-30 00:58:19.439713 | orchestrator | Friday 30 May 2025 00:51:29 +0000 (0:00:00.411) 0:06:02.889 ************ 2025-05-30 00:58:19.439717 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.439722 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.439727 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.439732 | orchestrator | 2025-05-30 00:58:19.439736 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-30 00:58:19.439741 | orchestrator | Friday 30 May 2025 00:51:30 +0000 (0:00:00.273) 0:06:03.162 ************ 2025-05-30 00:58:19.439746 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.439750 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.439755 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.439760 | orchestrator | 2025-05-30 00:58:19.439765 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-30 00:58:19.439769 | orchestrator | Friday 30 May 2025 00:51:30 +0000 (0:00:00.324) 0:06:03.487 ************ 2025-05-30 00:58:19.439774 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-30 00:58:19.439779 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.439784 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-30 00:58:19.439788 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.439793 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-30 00:58:19.439811 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.439816 | orchestrator | 2025-05-30 00:58:19.439824 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-30 00:58:19.439829 | orchestrator | Friday 30 May 2025 00:51:31 +0000 (0:00:00.870) 0:06:04.358 ************ 2025-05-30 00:58:19.439834 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.439839 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.439843 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.439848 | orchestrator | 2025-05-30 00:58:19.439853 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 
2025-05-30 00:58:19.439858 | orchestrator | Friday 30 May 2025 00:51:31 +0000 (0:00:00.294) 0:06:04.652 ************ 2025-05-30 00:58:19.439863 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-30 00:58:19.439868 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-30 00:58:19.439872 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-30 00:58:19.439877 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-30 00:58:19.439882 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-30 00:58:19.439887 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-30 00:58:19.439892 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.439896 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.439901 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-30 00:58:19.439906 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-30 00:58:19.439938 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-30 00:58:19.439947 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.439952 | orchestrator | 2025-05-30 00:58:19.439957 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-30 00:58:19.439962 | orchestrator | Friday 30 May 2025 00:51:32 +0000 (0:00:00.614) 0:06:05.266 ************ 2025-05-30 00:58:19.439967 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.439971 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.439976 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.439981 | orchestrator | 2025-05-30 00:58:19.439986 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-30 00:58:19.439990 | orchestrator | Friday 30 May 2025 00:51:32 +0000 (0:00:00.638) 0:06:05.905 ************ 2025-05-30 00:58:19.439995 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.440000 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.440005 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.440009 | orchestrator | 2025-05-30 00:58:19.440014 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-30 00:58:19.440019 | orchestrator | Friday 30 May 2025 00:51:33 +0000 (0:00:00.499) 0:06:06.405 ************ 2025-05-30 00:58:19.440024 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.440028 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.440033 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.440038 | orchestrator | 2025-05-30 00:58:19.440043 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-30 00:58:19.440048 | orchestrator | Friday 30 May 2025 00:51:33 +0000 (0:00:00.630) 0:06:07.035 ************ 2025-05-30 00:58:19.440052 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.440057 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.440062 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.440067 | orchestrator | 2025-05-30 00:58:19.440071 | orchestrator | TASK [ceph-mgr : set_fact container_exec_cmd] ********************************** 2025-05-30 00:58:19.440076 | orchestrator | Friday 30 May 2025 00:51:34 +0000 (0:00:00.479) 0:06:07.515 ************ 2025-05-30 00:58:19.440081 | orchestrator | ok: [testbed-node-0] => 
(item=testbed-node-0) 2025-05-30 00:58:19.440086 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-30 00:58:19.440091 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-30 00:58:19.440096 | orchestrator | 2025-05-30 00:58:19.440100 | orchestrator | TASK [ceph-mgr : include common.yml] ******************************************* 2025-05-30 00:58:19.440105 | orchestrator | Friday 30 May 2025 00:51:35 +0000 (0:00:00.796) 0:06:08.311 ************ 2025-05-30 00:58:19.440110 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:58:19.440115 | orchestrator | 2025-05-30 00:58:19.440120 | orchestrator | TASK [ceph-mgr : create mgr directory] ***************************************** 2025-05-30 00:58:19.440124 | orchestrator | Friday 30 May 2025 00:51:36 +0000 (0:00:00.840) 0:06:09.152 ************ 2025-05-30 00:58:19.440129 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:19.440134 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:19.440139 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:19.440144 | orchestrator | 2025-05-30 00:58:19.440149 | orchestrator | TASK [ceph-mgr : fetch ceph mgr keyring] *************************************** 2025-05-30 00:58:19.440155 | orchestrator | Friday 30 May 2025 00:51:36 +0000 (0:00:00.670) 0:06:09.822 ************ 2025-05-30 00:58:19.440160 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.440165 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.440170 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.440175 | orchestrator | 2025-05-30 00:58:19.440180 | orchestrator | TASK [ceph-mgr : create ceph mgr keyring(s) on a mon node] ********************* 2025-05-30 00:58:19.440185 | orchestrator | Friday 30 May 2025 00:51:37 +0000 (0:00:00.341) 0:06:10.163 ************ 2025-05-30 00:58:19.440191 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-30 00:58:19.440211 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-30 00:58:19.440216 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-30 00:58:19.440221 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-05-30 00:58:19.440226 | orchestrator | 2025-05-30 00:58:19.440231 | orchestrator | TASK [ceph-mgr : set_fact _mgr_keys] ******************************************* 2025-05-30 00:58:19.440237 | orchestrator | Friday 30 May 2025 00:51:45 +0000 (0:00:08.245) 0:06:18.409 ************ 2025-05-30 00:58:19.440242 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.440263 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.440269 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.440274 | orchestrator | 2025-05-30 00:58:19.440282 | orchestrator | TASK [ceph-mgr : get keys from monitors] *************************************** 2025-05-30 00:58:19.440296 | orchestrator | Friday 30 May 2025 00:51:45 +0000 (0:00:00.378) 0:06:18.788 ************ 2025-05-30 00:58:19.440301 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-30 00:58:19.440306 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-30 00:58:19.440311 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-30 00:58:19.440316 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-05-30 00:58:19.440321 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => 
(item=None) 2025-05-30 00:58:19.440326 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-30 00:58:19.440330 | orchestrator | 2025-05-30 00:58:19.440335 | orchestrator | TASK [ceph-mgr : copy ceph key(s) if needed] *********************************** 2025-05-30 00:58:19.440340 | orchestrator | Friday 30 May 2025 00:51:47 +0000 (0:00:02.076) 0:06:20.864 ************ 2025-05-30 00:58:19.440344 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-30 00:58:19.440349 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-30 00:58:19.440354 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-30 00:58:19.440359 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-30 00:58:19.440363 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-05-30 00:58:19.440368 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-05-30 00:58:19.440373 | orchestrator | 2025-05-30 00:58:19.440378 | orchestrator | TASK [ceph-mgr : set mgr key permissions] ************************************** 2025-05-30 00:58:19.440383 | orchestrator | Friday 30 May 2025 00:51:48 +0000 (0:00:01.163) 0:06:22.028 ************ 2025-05-30 00:58:19.440387 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.440392 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.440397 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.440402 | orchestrator | 2025-05-30 00:58:19.440407 | orchestrator | TASK [ceph-mgr : append dashboard modules to ceph_mgr_modules] ***************** 2025-05-30 00:58:19.440411 | orchestrator | Friday 30 May 2025 00:51:49 +0000 (0:00:00.695) 0:06:22.724 ************ 2025-05-30 00:58:19.440416 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.440421 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.440426 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.440430 | orchestrator | 2025-05-30 00:58:19.440435 | orchestrator | TASK [ceph-mgr : include pre_requisite.yml] ************************************ 2025-05-30 00:58:19.440440 | orchestrator | Friday 30 May 2025 00:51:49 +0000 (0:00:00.409) 0:06:23.133 ************ 2025-05-30 00:58:19.440445 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.440450 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.440454 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.440459 | orchestrator | 2025-05-30 00:58:19.440464 | orchestrator | TASK [ceph-mgr : include start_mgr.yml] **************************************** 2025-05-30 00:58:19.440469 | orchestrator | Friday 30 May 2025 00:51:50 +0000 (0:00:00.282) 0:06:23.415 ************ 2025-05-30 00:58:19.440473 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:58:19.440478 | orchestrator | 2025-05-30 00:58:19.440483 | orchestrator | TASK [ceph-mgr : ensure systemd service override directory exists] ************* 2025-05-30 00:58:19.440491 | orchestrator | Friday 30 May 2025 00:51:50 +0000 (0:00:00.495) 0:06:23.911 ************ 2025-05-30 00:58:19.440495 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.440500 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.440504 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.440509 | orchestrator | 2025-05-30 00:58:19.440513 | orchestrator | TASK [ceph-mgr : add ceph-mgr systemd service overrides] *********************** 2025-05-30 00:58:19.440518 | orchestrator | 
Friday 30 May 2025 00:51:51 +0000 (0:00:00.511) 0:06:24.422 ************ 2025-05-30 00:58:19.440522 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.440527 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.440531 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.440536 | orchestrator | 2025-05-30 00:58:19.440540 | orchestrator | TASK [ceph-mgr : include_tasks systemd.yml] ************************************ 2025-05-30 00:58:19.440545 | orchestrator | Friday 30 May 2025 00:51:51 +0000 (0:00:00.416) 0:06:24.839 ************ 2025-05-30 00:58:19.440549 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:58:19.440554 | orchestrator | 2025-05-30 00:58:19.440558 | orchestrator | TASK [ceph-mgr : generate systemd unit file] *********************************** 2025-05-30 00:58:19.440563 | orchestrator | Friday 30 May 2025 00:51:52 +0000 (0:00:00.552) 0:06:25.392 ************ 2025-05-30 00:58:19.440567 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:19.440572 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:19.440576 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:19.440581 | orchestrator | 2025-05-30 00:58:19.440585 | orchestrator | TASK [ceph-mgr : generate systemd ceph-mgr target file] ************************ 2025-05-30 00:58:19.440590 | orchestrator | Friday 30 May 2025 00:51:53 +0000 (0:00:01.513) 0:06:26.905 ************ 2025-05-30 00:58:19.440594 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:19.440599 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:19.440603 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:19.440608 | orchestrator | 2025-05-30 00:58:19.440612 | orchestrator | TASK [ceph-mgr : enable ceph-mgr.target] *************************************** 2025-05-30 00:58:19.440617 | orchestrator | Friday 30 May 2025 00:51:54 +0000 (0:00:01.142) 0:06:28.048 ************ 2025-05-30 00:58:19.440622 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:19.440626 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:19.440631 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:19.440635 | orchestrator | 2025-05-30 00:58:19.440640 | orchestrator | TASK [ceph-mgr : systemd start mgr] ******************************************** 2025-05-30 00:58:19.440644 | orchestrator | Friday 30 May 2025 00:51:56 +0000 (0:00:01.651) 0:06:29.700 ************ 2025-05-30 00:58:19.440649 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:19.440653 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:19.440658 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:19.440662 | orchestrator | 2025-05-30 00:58:19.440681 | orchestrator | TASK [ceph-mgr : include mgr_modules.yml] ************************************** 2025-05-30 00:58:19.440689 | orchestrator | Friday 30 May 2025 00:51:58 +0000 (0:00:02.130) 0:06:31.830 ************ 2025-05-30 00:58:19.440694 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.440698 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.440703 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-05-30 00:58:19.440707 | orchestrator | 2025-05-30 00:58:19.440712 | orchestrator | TASK [ceph-mgr : wait for all mgr to be up] ************************************ 2025-05-30 00:58:19.440716 | orchestrator | Friday 30 May 2025 00:51:59 +0000 (0:00:00.561) 0:06:32.393 ************ 2025-05-30 
00:58:19.440721 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (30 retries left). 2025-05-30 00:58:19.440725 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (29 retries left). 2025-05-30 00:58:19.440730 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-05-30 00:58:19.440738 | orchestrator | 2025-05-30 00:58:19.440743 | orchestrator | TASK [ceph-mgr : get enabled modules from ceph-mgr] **************************** 2025-05-30 00:58:19.440747 | orchestrator | Friday 30 May 2025 00:52:12 +0000 (0:00:13.300) 0:06:45.693 ************ 2025-05-30 00:58:19.440752 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-05-30 00:58:19.440756 | orchestrator | 2025-05-30 00:58:19.440761 | orchestrator | TASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-05-30 00:58:19.440765 | orchestrator | Friday 30 May 2025 00:52:14 +0000 (0:00:01.872) 0:06:47.565 ************ 2025-05-30 00:58:19.440770 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.440774 | orchestrator | 2025-05-30 00:58:19.440779 | orchestrator | TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] ************************** 2025-05-30 00:58:19.440783 | orchestrator | Friday 30 May 2025 00:52:14 +0000 (0:00:00.446) 0:06:48.012 ************ 2025-05-30 00:58:19.440788 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.440792 | orchestrator | 2025-05-30 00:58:19.440797 | orchestrator | TASK [ceph-mgr : disable ceph mgr enabled modules] ***************************** 2025-05-30 00:58:19.440801 | orchestrator | Friday 30 May 2025 00:52:15 +0000 (0:00:00.311) 0:06:48.324 ************ 2025-05-30 00:58:19.440806 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-05-30 00:58:19.440811 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-05-30 00:58:19.440815 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-05-30 00:58:19.440820 | orchestrator | 2025-05-30 00:58:19.440824 | orchestrator | TASK [ceph-mgr : add modules to ceph-mgr] ************************************** 2025-05-30 00:58:19.440829 | orchestrator | Friday 30 May 2025 00:52:21 +0000 (0:00:06.492) 0:06:54.816 ************ 2025-05-30 00:58:19.440833 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-05-30 00:58:19.440838 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-05-30 00:58:19.440842 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-05-30 00:58:19.440847 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-05-30 00:58:19.440851 | orchestrator | 2025-05-30 00:58:19.440856 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-30 00:58:19.440860 | orchestrator | Friday 30 May 2025 00:52:27 +0000 (0:00:05.727) 0:07:00.544 ************ 2025-05-30 00:58:19.440865 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:19.440869 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:19.440874 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:19.440878 | orchestrator | 2025-05-30 00:58:19.440883 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-30 00:58:19.440887 | orchestrator | Friday 30 May 
2025 00:52:28 +0000 (0:00:00.794) 0:07:01.339 ************ 2025-05-30 00:58:19.440892 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:58:19.440896 | orchestrator | 2025-05-30 00:58:19.440901 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-05-30 00:58:19.440905 | orchestrator | Friday 30 May 2025 00:52:28 +0000 (0:00:00.794) 0:07:02.134 ************ 2025-05-30 00:58:19.440923 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.440928 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.440932 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.440937 | orchestrator | 2025-05-30 00:58:19.440942 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-30 00:58:19.440946 | orchestrator | Friday 30 May 2025 00:52:29 +0000 (0:00:00.334) 0:07:02.469 ************ 2025-05-30 00:58:19.440951 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:19.440955 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:19.440960 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:19.440964 | orchestrator | 2025-05-30 00:58:19.440969 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-05-30 00:58:19.440976 | orchestrator | Friday 30 May 2025 00:52:30 +0000 (0:00:01.486) 0:07:03.955 ************ 2025-05-30 00:58:19.440981 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-30 00:58:19.440986 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-30 00:58:19.440990 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-30 00:58:19.440995 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.440999 | orchestrator | 2025-05-30 00:58:19.441004 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-05-30 00:58:19.441008 | orchestrator | Friday 30 May 2025 00:52:31 +0000 (0:00:00.670) 0:07:04.626 ************ 2025-05-30 00:58:19.441013 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.441017 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.441022 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.441026 | orchestrator | 2025-05-30 00:58:19.441044 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-30 00:58:19.441052 | orchestrator | Friday 30 May 2025 00:52:31 +0000 (0:00:00.378) 0:07:05.004 ************ 2025-05-30 00:58:19.441056 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:19.441061 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:19.441065 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:19.441070 | orchestrator | 2025-05-30 00:58:19.441074 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-05-30 00:58:19.441079 | orchestrator | 2025-05-30 00:58:19.441084 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-30 00:58:19.441088 | orchestrator | Friday 30 May 2025 00:52:34 +0000 (0:00:02.237) 0:07:07.242 ************ 2025-05-30 00:58:19.441093 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:58:19.441097 | orchestrator | 2025-05-30 00:58:19.441102 | orchestrator | TASK [ceph-handler : check 
for a mon container] ******************************** 2025-05-30 00:58:19.441106 | orchestrator | Friday 30 May 2025 00:52:34 +0000 (0:00:00.816) 0:07:08.058 ************ 2025-05-30 00:58:19.441111 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.441115 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.441120 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.441124 | orchestrator | 2025-05-30 00:58:19.441129 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-30 00:58:19.441133 | orchestrator | Friday 30 May 2025 00:52:35 +0000 (0:00:00.305) 0:07:08.364 ************ 2025-05-30 00:58:19.441138 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.441142 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.441147 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.441151 | orchestrator | 2025-05-30 00:58:19.441156 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-30 00:58:19.441160 | orchestrator | Friday 30 May 2025 00:52:35 +0000 (0:00:00.748) 0:07:09.113 ************ 2025-05-30 00:58:19.441165 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.441169 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.441174 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.441178 | orchestrator | 2025-05-30 00:58:19.441183 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-30 00:58:19.441187 | orchestrator | Friday 30 May 2025 00:52:37 +0000 (0:00:01.024) 0:07:10.138 ************ 2025-05-30 00:58:19.441192 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.441196 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.441201 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.441205 | orchestrator | 2025-05-30 00:58:19.441210 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-30 00:58:19.441215 | orchestrator | Friday 30 May 2025 00:52:37 +0000 (0:00:00.773) 0:07:10.912 ************ 2025-05-30 00:58:19.441219 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.441224 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.441228 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.441236 | orchestrator | 2025-05-30 00:58:19.441240 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-30 00:58:19.441245 | orchestrator | Friday 30 May 2025 00:52:38 +0000 (0:00:00.328) 0:07:11.240 ************ 2025-05-30 00:58:19.441250 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.441254 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.441259 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.441263 | orchestrator | 2025-05-30 00:58:19.441268 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-30 00:58:19.441272 | orchestrator | Friday 30 May 2025 00:52:38 +0000 (0:00:00.615) 0:07:11.855 ************ 2025-05-30 00:58:19.441277 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.441281 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.441286 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.441290 | orchestrator | 2025-05-30 00:58:19.441295 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-30 00:58:19.441299 | orchestrator | Friday 30 May 
2025 00:52:39 +0000 (0:00:00.323) 0:07:12.179 ************ 2025-05-30 00:58:19.441304 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.441308 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.441313 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.441317 | orchestrator | 2025-05-30 00:58:19.441322 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-30 00:58:19.441326 | orchestrator | Friday 30 May 2025 00:52:39 +0000 (0:00:00.339) 0:07:12.518 ************ 2025-05-30 00:58:19.441331 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.441335 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.441340 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.441344 | orchestrator | 2025-05-30 00:58:19.441349 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-30 00:58:19.441353 | orchestrator | Friday 30 May 2025 00:52:39 +0000 (0:00:00.315) 0:07:12.834 ************ 2025-05-30 00:58:19.441358 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.441362 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.441367 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.441371 | orchestrator | 2025-05-30 00:58:19.441376 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-30 00:58:19.441380 | orchestrator | Friday 30 May 2025 00:52:40 +0000 (0:00:00.327) 0:07:13.162 ************ 2025-05-30 00:58:19.441385 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.441389 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.441394 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.441398 | orchestrator | 2025-05-30 00:58:19.441436 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-30 00:58:19.441443 | orchestrator | Friday 30 May 2025 00:52:41 +0000 (0:00:01.031) 0:07:14.194 ************ 2025-05-30 00:58:19.441450 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.441457 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.441465 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.441472 | orchestrator | 2025-05-30 00:58:19.441478 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-30 00:58:19.441490 | orchestrator | Friday 30 May 2025 00:52:41 +0000 (0:00:00.363) 0:07:14.557 ************ 2025-05-30 00:58:19.441499 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.441506 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.441538 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.441546 | orchestrator | 2025-05-30 00:58:19.441557 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-30 00:58:19.441565 | orchestrator | Friday 30 May 2025 00:52:41 +0000 (0:00:00.335) 0:07:14.892 ************ 2025-05-30 00:58:19.441572 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.441578 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.441586 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.441593 | orchestrator | 2025-05-30 00:58:19.441600 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-30 00:58:19.441614 | orchestrator | Friday 30 May 2025 00:52:42 +0000 (0:00:00.633) 0:07:15.526 ************ 2025-05-30 00:58:19.441621 | 
orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.441628 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.441632 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.441637 | orchestrator | 2025-05-30 00:58:19.441641 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-30 00:58:19.441646 | orchestrator | Friday 30 May 2025 00:52:42 +0000 (0:00:00.395) 0:07:15.921 ************ 2025-05-30 00:58:19.441650 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.441655 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.441659 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.441664 | orchestrator | 2025-05-30 00:58:19.441668 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-30 00:58:19.441673 | orchestrator | Friday 30 May 2025 00:52:43 +0000 (0:00:00.346) 0:07:16.268 ************ 2025-05-30 00:58:19.441677 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.441682 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.441686 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.441691 | orchestrator | 2025-05-30 00:58:19.441695 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-30 00:58:19.441700 | orchestrator | Friday 30 May 2025 00:52:43 +0000 (0:00:00.359) 0:07:16.628 ************ 2025-05-30 00:58:19.441704 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.441709 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.441713 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.441718 | orchestrator | 2025-05-30 00:58:19.441722 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-30 00:58:19.441727 | orchestrator | Friday 30 May 2025 00:52:44 +0000 (0:00:00.691) 0:07:17.319 ************ 2025-05-30 00:58:19.441731 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.441736 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.441740 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.441745 | orchestrator | 2025-05-30 00:58:19.441749 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-30 00:58:19.441754 | orchestrator | Friday 30 May 2025 00:52:44 +0000 (0:00:00.359) 0:07:17.679 ************ 2025-05-30 00:58:19.441758 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.441763 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.441767 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.441772 | orchestrator | 2025-05-30 00:58:19.441776 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-30 00:58:19.441781 | orchestrator | Friday 30 May 2025 00:52:44 +0000 (0:00:00.355) 0:07:18.035 ************ 2025-05-30 00:58:19.441785 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.441790 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.441794 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.441799 | orchestrator | 2025-05-30 00:58:19.441803 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-30 00:58:19.441808 | orchestrator | Friday 30 May 2025 00:52:45 +0000 (0:00:00.345) 0:07:18.380 ************ 2025-05-30 00:58:19.441812 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.441817 | orchestrator | skipping: [testbed-node-4] 
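The ceph-handler tasks above all follow the same two-step pattern: a per-daemon probe ("check for a ... container") runs on every node, and a later "set_fact handler_<daemon>_status" task turns the probe result into a boolean fact that the restart handlers consult later. A minimal sketch of that pattern, in generic Ansible and assuming docker as the container engine (illustrative only, not the actual ceph-ansible tasks; names below are made up):

- name: check for an osd container
  ansible.builtin.command: docker ps -q --filter name=ceph-osd
  register: osd_container_check
  changed_when: false
  failed_when: false

- name: set_fact handler_osd_status
  ansible.builtin.set_fact:
    handler_osd_status: "{{ (osd_container_check.stdout | default('')) | length > 0 }}"
  when: inventory_hostname in groups.get('osds', [])

This is why, on the OSD nodes testbed-node-3..5, the mon and mgr checks and facts are skipped while handler_osd_status, handler_mds_status, handler_rgw_status and handler_crash_status come back ok.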
2025-05-30 00:58:19.441821 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.441826 | orchestrator | 2025-05-30 00:58:19.441830 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-30 00:58:19.441835 | orchestrator | Friday 30 May 2025 00:52:45 +0000 (0:00:00.643) 0:07:19.023 ************ 2025-05-30 00:58:19.441839 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.441844 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.441848 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.441853 | orchestrator | 2025-05-30 00:58:19.441857 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-30 00:58:19.441866 | orchestrator | Friday 30 May 2025 00:52:46 +0000 (0:00:00.344) 0:07:19.367 ************ 2025-05-30 00:58:19.441870 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.441875 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.441879 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.441884 | orchestrator | 2025-05-30 00:58:19.441888 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-30 00:58:19.441893 | orchestrator | Friday 30 May 2025 00:52:46 +0000 (0:00:00.354) 0:07:19.722 ************ 2025-05-30 00:58:19.441898 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.441902 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.441907 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.441940 | orchestrator | 2025-05-30 00:58:19.441946 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-30 00:58:19.441950 | orchestrator | Friday 30 May 2025 00:52:46 +0000 (0:00:00.344) 0:07:20.067 ************ 2025-05-30 00:58:19.441955 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.441959 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.441964 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.441968 | orchestrator | 2025-05-30 00:58:19.441973 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-30 00:58:19.441977 | orchestrator | Friday 30 May 2025 00:52:47 +0000 (0:00:00.608) 0:07:20.675 ************ 2025-05-30 00:58:19.441982 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.441987 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.441991 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.441995 | orchestrator | 2025-05-30 00:58:19.441999 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-30 00:58:19.442003 | orchestrator | Friday 30 May 2025 00:52:47 +0000 (0:00:00.353) 0:07:21.029 ************ 2025-05-30 00:58:19.442007 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.442028 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.442050 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.442055 | orchestrator | 2025-05-30 00:58:19.442063 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-30 00:58:19.442067 | orchestrator | Friday 30 May 2025 00:52:48 +0000 (0:00:00.344) 0:07:21.373 ************ 2025-05-30 00:58:19.442071 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.442075 | orchestrator | skipping: [testbed-node-4] 2025-05-30 
00:58:19.442079 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.442083 | orchestrator | 2025-05-30 00:58:19.442088 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-30 00:58:19.442092 | orchestrator | Friday 30 May 2025 00:52:48 +0000 (0:00:00.332) 0:07:21.706 ************ 2025-05-30 00:58:19.442096 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.442100 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.442104 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.442108 | orchestrator | 2025-05-30 00:58:19.442112 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-30 00:58:19.442116 | orchestrator | Friday 30 May 2025 00:52:49 +0000 (0:00:00.604) 0:07:22.311 ************ 2025-05-30 00:58:19.442120 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.442124 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.442128 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.442132 | orchestrator | 2025-05-30 00:58:19.442136 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-30 00:58:19.442141 | orchestrator | Friday 30 May 2025 00:52:49 +0000 (0:00:00.350) 0:07:22.661 ************ 2025-05-30 00:58:19.442145 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.442149 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.442153 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.442157 | orchestrator | 2025-05-30 00:58:19.442161 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-30 00:58:19.442172 | orchestrator | Friday 30 May 2025 00:52:49 +0000 (0:00:00.331) 0:07:22.992 ************ 2025-05-30 00:58:19.442176 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-30 00:58:19.442180 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-30 00:58:19.442184 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.442188 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-30 00:58:19.442192 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-30 00:58:19.442196 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.442200 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-30 00:58:19.442204 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-30 00:58:19.442208 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.442212 | orchestrator | 2025-05-30 00:58:19.442217 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-30 00:58:19.442221 | orchestrator | Friday 30 May 2025 00:52:50 +0000 (0:00:00.375) 0:07:23.368 ************ 2025-05-30 00:58:19.442225 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-30 00:58:19.442229 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-30 00:58:19.442233 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-30 00:58:19.442237 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-30 00:58:19.442241 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.442245 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.442249 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-30 
00:58:19.442253 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-30 00:58:19.442257 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.442261 | orchestrator | 2025-05-30 00:58:19.442265 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-30 00:58:19.442269 | orchestrator | Friday 30 May 2025 00:52:50 +0000 (0:00:00.648) 0:07:24.016 ************ 2025-05-30 00:58:19.442274 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.442278 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.442282 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.442286 | orchestrator | 2025-05-30 00:58:19.442290 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-30 00:58:19.442294 | orchestrator | Friday 30 May 2025 00:52:51 +0000 (0:00:00.332) 0:07:24.349 ************ 2025-05-30 00:58:19.442298 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.442302 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.442306 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.442310 | orchestrator | 2025-05-30 00:58:19.442314 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-30 00:58:19.442319 | orchestrator | Friday 30 May 2025 00:52:51 +0000 (0:00:00.327) 0:07:24.677 ************ 2025-05-30 00:58:19.442323 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.442327 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.442331 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.442335 | orchestrator | 2025-05-30 00:58:19.442339 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-30 00:58:19.442343 | orchestrator | Friday 30 May 2025 00:52:51 +0000 (0:00:00.351) 0:07:25.029 ************ 2025-05-30 00:58:19.442347 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.442351 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.442355 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.442359 | orchestrator | 2025-05-30 00:58:19.442363 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-30 00:58:19.442367 | orchestrator | Friday 30 May 2025 00:52:52 +0000 (0:00:00.641) 0:07:25.670 ************ 2025-05-30 00:58:19.442371 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.442376 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.442383 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.442387 | orchestrator | 2025-05-30 00:58:19.442391 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-30 00:58:19.442395 | orchestrator | Friday 30 May 2025 00:52:52 +0000 (0:00:00.354) 0:07:26.025 ************ 2025-05-30 00:58:19.442411 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.442416 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.442422 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.442427 | orchestrator | 2025-05-30 00:58:19.442431 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-30 00:58:19.442435 | orchestrator | Friday 30 May 2025 00:52:53 +0000 (0:00:00.355) 0:07:26.381 ************ 2025-05-30 00:58:19.442439 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-3)  2025-05-30 00:58:19.442443 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-30 00:58:19.442447 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-30 00:58:19.442451 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.442455 | orchestrator | 2025-05-30 00:58:19.442459 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-30 00:58:19.442463 | orchestrator | Friday 30 May 2025 00:52:53 +0000 (0:00:00.471) 0:07:26.852 ************ 2025-05-30 00:58:19.442467 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-30 00:58:19.442472 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-30 00:58:19.442476 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-30 00:58:19.442480 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.442484 | orchestrator | 2025-05-30 00:58:19.442488 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-30 00:58:19.442492 | orchestrator | Friday 30 May 2025 00:52:54 +0000 (0:00:00.424) 0:07:27.277 ************ 2025-05-30 00:58:19.442496 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-30 00:58:19.442500 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-30 00:58:19.442504 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-30 00:58:19.442508 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.442512 | orchestrator | 2025-05-30 00:58:19.442516 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-30 00:58:19.442520 | orchestrator | Friday 30 May 2025 00:52:54 +0000 (0:00:00.425) 0:07:27.702 ************ 2025-05-30 00:58:19.442525 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.442529 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.442533 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.442537 | orchestrator | 2025-05-30 00:58:19.442541 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-30 00:58:19.442545 | orchestrator | Friday 30 May 2025 00:52:55 +0000 (0:00:00.582) 0:07:28.285 ************ 2025-05-30 00:58:19.442549 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-30 00:58:19.442553 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.442557 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-30 00:58:19.442561 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.442565 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-30 00:58:19.442569 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.442573 | orchestrator | 2025-05-30 00:58:19.442578 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-30 00:58:19.442582 | orchestrator | Friday 30 May 2025 00:52:55 +0000 (0:00:00.604) 0:07:28.889 ************ 2025-05-30 00:58:19.442586 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.442590 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.442594 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.442598 | orchestrator | 2025-05-30 00:58:19.442602 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-30 00:58:19.442606 | 
orchestrator | Friday 30 May 2025 00:52:56 +0000 (0:00:00.372) 0:07:29.262 ************ 2025-05-30 00:58:19.442613 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.442617 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.442621 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.442625 | orchestrator | 2025-05-30 00:58:19.442630 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-30 00:58:19.442634 | orchestrator | Friday 30 May 2025 00:52:56 +0000 (0:00:00.321) 0:07:29.584 ************ 2025-05-30 00:58:19.442638 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-30 00:58:19.442642 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.442646 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-30 00:58:19.442650 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.442654 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-30 00:58:19.442658 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.442662 | orchestrator | 2025-05-30 00:58:19.442666 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-30 00:58:19.442670 | orchestrator | Friday 30 May 2025 00:52:57 +0000 (0:00:00.845) 0:07:30.430 ************ 2025-05-30 00:58:19.442674 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-30 00:58:19.442678 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-30 00:58:19.442683 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.442687 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.442691 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-30 00:58:19.442695 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.442699 | orchestrator | 2025-05-30 00:58:19.442703 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-30 00:58:19.442707 | orchestrator | Friday 30 May 2025 00:52:57 +0000 (0:00:00.389) 0:07:30.819 ************ 2025-05-30 00:58:19.442719 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-30 00:58:19.442723 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-30 00:58:19.442727 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-30 00:58:19.442731 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-30 00:58:19.442753 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.442758 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-30 00:58:19.442764 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-30 00:58:19.442769 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.442773 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-30 00:58:19.442777 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-30 00:58:19.442781 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-30 00:58:19.442785 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.442789 | orchestrator | 2025-05-30 00:58:19.442793 | orchestrator | TASK [ceph-config 
: generate ceph.conf configuration file] ********************* 2025-05-30 00:58:19.442797 | orchestrator | Friday 30 May 2025 00:52:58 +0000 (0:00:00.590) 0:07:31.409 ************ 2025-05-30 00:58:19.442801 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.442806 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.442810 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.442814 | orchestrator | 2025-05-30 00:58:19.442818 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-30 00:58:19.442822 | orchestrator | Friday 30 May 2025 00:52:58 +0000 (0:00:00.640) 0:07:32.050 ************ 2025-05-30 00:58:19.442826 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-30 00:58:19.442830 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.442834 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-30 00:58:19.442841 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.442845 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-30 00:58:19.442849 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.442853 | orchestrator | 2025-05-30 00:58:19.442857 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-30 00:58:19.442862 | orchestrator | Friday 30 May 2025 00:52:59 +0000 (0:00:00.510) 0:07:32.560 ************ 2025-05-30 00:58:19.442866 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.442870 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.442874 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.442878 | orchestrator | 2025-05-30 00:58:19.442882 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-30 00:58:19.442886 | orchestrator | Friday 30 May 2025 00:53:00 +0000 (0:00:00.653) 0:07:33.213 ************ 2025-05-30 00:58:19.442890 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.442894 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.442898 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.442902 | orchestrator | 2025-05-30 00:58:19.442906 | orchestrator | TASK [ceph-osd : set_fact add_osd] ********************************************* 2025-05-30 00:58:19.442927 | orchestrator | Friday 30 May 2025 00:53:00 +0000 (0:00:00.486) 0:07:33.700 ************ 2025-05-30 00:58:19.442932 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.442936 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.442940 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.442944 | orchestrator | 2025-05-30 00:58:19.442948 | orchestrator | TASK [ceph-osd : set_fact container_exec_cmd] ********************************** 2025-05-30 00:58:19.442952 | orchestrator | Friday 30 May 2025 00:53:00 +0000 (0:00:00.422) 0:07:34.122 ************ 2025-05-30 00:58:19.442957 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-30 00:58:19.442961 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-30 00:58:19.442965 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-30 00:58:19.442969 | orchestrator | 2025-05-30 00:58:19.442973 | orchestrator | TASK [ceph-osd : include_tasks system_tuning.yml] ****************************** 2025-05-30 00:58:19.442977 | orchestrator | Friday 30 May 2025 00:53:01 +0000 (0:00:00.593) 
0:07:34.716 ************ 2025-05-30 00:58:19.442981 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:58:19.442985 | orchestrator | 2025-05-30 00:58:19.442989 | orchestrator | TASK [ceph-osd : disable osd directory parsing by updatedb] ******************** 2025-05-30 00:58:19.442994 | orchestrator | Friday 30 May 2025 00:53:02 +0000 (0:00:00.566) 0:07:35.283 ************ 2025-05-30 00:58:19.442998 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.443002 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.443006 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.443010 | orchestrator | 2025-05-30 00:58:19.443014 | orchestrator | TASK [ceph-osd : disable osd directory path in updatedb.conf] ****************** 2025-05-30 00:58:19.443018 | orchestrator | Friday 30 May 2025 00:53:02 +0000 (0:00:00.302) 0:07:35.585 ************ 2025-05-30 00:58:19.443022 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.443026 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.443030 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.443034 | orchestrator | 2025-05-30 00:58:19.443038 | orchestrator | TASK [ceph-osd : create tmpfiles.d directory] ********************************** 2025-05-30 00:58:19.443043 | orchestrator | Friday 30 May 2025 00:53:03 +0000 (0:00:00.641) 0:07:36.226 ************ 2025-05-30 00:58:19.443047 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.443051 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.443055 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.443059 | orchestrator | 2025-05-30 00:58:19.443063 | orchestrator | TASK [ceph-osd : disable transparent hugepage] ********************************* 2025-05-30 00:58:19.443070 | orchestrator | Friday 30 May 2025 00:53:03 +0000 (0:00:00.328) 0:07:36.555 ************ 2025-05-30 00:58:19.443074 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.443078 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.443082 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.443086 | orchestrator | 2025-05-30 00:58:19.443090 | orchestrator | TASK [ceph-osd : get default vm.min_free_kbytes] ******************************* 2025-05-30 00:58:19.443094 | orchestrator | Friday 30 May 2025 00:53:03 +0000 (0:00:00.317) 0:07:36.873 ************ 2025-05-30 00:58:19.443098 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.443102 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.443107 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.443111 | orchestrator | 2025-05-30 00:58:19.443127 | orchestrator | TASK [ceph-osd : set_fact vm_min_free_kbytes] ********************************** 2025-05-30 00:58:19.443135 | orchestrator | Friday 30 May 2025 00:53:04 +0000 (0:00:00.675) 0:07:37.548 ************ 2025-05-30 00:58:19.443139 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.443143 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.443147 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.443151 | orchestrator | 2025-05-30 00:58:19.443155 | orchestrator | TASK [ceph-osd : apply operating system tuning] ******************************** 2025-05-30 00:58:19.443159 | orchestrator | Friday 30 May 2025 00:53:05 +0000 (0:00:00.680) 0:07:38.229 ************ 2025-05-30 00:58:19.443164 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 
'enable': True}) 2025-05-30 00:58:19.443168 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-30 00:58:19.443172 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-30 00:58:19.443176 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-30 00:58:19.443180 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-30 00:58:19.443184 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-30 00:58:19.443188 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-30 00:58:19.443192 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-30 00:58:19.443196 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-30 00:58:19.443200 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-30 00:58:19.443204 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-30 00:58:19.443208 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-30 00:58:19.443213 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-30 00:58:19.443217 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-30 00:58:19.443221 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-30 00:58:19.443225 | orchestrator | 2025-05-30 00:58:19.443229 | orchestrator | TASK [ceph-osd : install dependencies] ***************************************** 2025-05-30 00:58:19.443233 | orchestrator | Friday 30 May 2025 00:53:07 +0000 (0:00:02.169) 0:07:40.398 ************ 2025-05-30 00:58:19.443237 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.443241 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.443245 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.443249 | orchestrator | 2025-05-30 00:58:19.443253 | orchestrator | TASK [ceph-osd : include_tasks common.yml] ************************************* 2025-05-30 00:58:19.443257 | orchestrator | Friday 30 May 2025 00:53:07 +0000 (0:00:00.295) 0:07:40.694 ************ 2025-05-30 00:58:19.443261 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:58:19.443269 | orchestrator | 2025-05-30 00:58:19.443273 | orchestrator | TASK [ceph-osd : create bootstrap-osd and osd directories] ********************* 2025-05-30 00:58:19.443277 | orchestrator | Friday 30 May 2025 00:53:08 +0000 (0:00:00.762) 0:07:41.456 ************ 2025-05-30 00:58:19.443281 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-30 00:58:19.443285 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-30 00:58:19.443289 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-30 00:58:19.443293 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-05-30 00:58:19.443298 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-05-30 00:58:19.443302 | orchestrator | ok: 
[testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-05-30 00:58:19.443306 | orchestrator | 2025-05-30 00:58:19.443310 | orchestrator | TASK [ceph-osd : get keys from monitors] *************************************** 2025-05-30 00:58:19.443314 | orchestrator | Friday 30 May 2025 00:53:09 +0000 (0:00:01.044) 0:07:42.500 ************ 2025-05-30 00:58:19.443318 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-30 00:58:19.443322 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-30 00:58:19.443326 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-30 00:58:19.443330 | orchestrator | 2025-05-30 00:58:19.443334 | orchestrator | TASK [ceph-osd : copy ceph key(s) if needed] *********************************** 2025-05-30 00:58:19.443338 | orchestrator | Friday 30 May 2025 00:53:11 +0000 (0:00:01.803) 0:07:44.303 ************ 2025-05-30 00:58:19.443342 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-30 00:58:19.443346 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-30 00:58:19.443351 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.443355 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-30 00:58:19.443359 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-30 00:58:19.443363 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.443367 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-30 00:58:19.443371 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-30 00:58:19.443375 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:58:19.443379 | orchestrator | 2025-05-30 00:58:19.443383 | orchestrator | TASK [ceph-osd : set noup flag] ************************************************ 2025-05-30 00:58:19.443387 | orchestrator | Friday 30 May 2025 00:53:12 +0000 (0:00:01.618) 0:07:45.921 ************ 2025-05-30 00:58:19.443391 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-30 00:58:19.443395 | orchestrator | 2025-05-30 00:58:19.443410 | orchestrator | TASK [ceph-osd : include container_options_facts.yml] ************************** 2025-05-30 00:58:19.443415 | orchestrator | Friday 30 May 2025 00:53:15 +0000 (0:00:02.303) 0:07:48.225 ************ 2025-05-30 00:58:19.443419 | orchestrator | included: /ansible/roles/ceph-osd/tasks/container_options_facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:58:19.443423 | orchestrator | 2025-05-30 00:58:19.443427 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=0'] *** 2025-05-30 00:58:19.443432 | orchestrator | Friday 30 May 2025 00:53:15 +0000 (0:00:00.618) 0:07:48.844 ************ 2025-05-30 00:58:19.443436 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.443440 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.443477 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.443486 | orchestrator | 2025-05-30 00:58:19.443491 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=1'] *** 2025-05-30 00:58:19.443495 | orchestrator | Friday 30 May 2025 00:53:16 +0000 (0:00:00.520) 0:07:49.364 ************ 2025-05-30 00:58:19.443499 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.443503 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.443507 | orchestrator | skipping: [testbed-node-5] 2025-05-30 
00:58:19.443511 | orchestrator | 2025-05-30 00:58:19.443519 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=0'] *** 2025-05-30 00:58:19.443523 | orchestrator | Friday 30 May 2025 00:53:16 +0000 (0:00:00.342) 0:07:49.706 ************ 2025-05-30 00:58:19.443527 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.443531 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.443535 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.443539 | orchestrator | 2025-05-30 00:58:19.443543 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=1'] *** 2025-05-30 00:58:19.443548 | orchestrator | Friday 30 May 2025 00:53:16 +0000 (0:00:00.343) 0:07:50.050 ************ 2025-05-30 00:58:19.443552 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.443556 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.443560 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.443564 | orchestrator | 2025-05-30 00:58:19.443568 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm.yml] ****************************** 2025-05-30 00:58:19.443572 | orchestrator | Friday 30 May 2025 00:53:17 +0000 (0:00:00.325) 0:07:50.376 ************ 2025-05-30 00:58:19.443576 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:58:19.443580 | orchestrator | 2025-05-30 00:58:19.443584 | orchestrator | TASK [ceph-osd : use ceph-volume to create bluestore osds] ********************* 2025-05-30 00:58:19.443588 | orchestrator | Friday 30 May 2025 00:53:18 +0000 (0:00:00.815) 0:07:51.192 ************ 2025-05-30 00:58:19.443592 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2ff0e7ee-f669-5460-a216-2d1fc13a4a65', 'data_vg': 'ceph-2ff0e7ee-f669-5460-a216-2d1fc13a4a65'}) 2025-05-30 00:58:19.443597 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6d0cb66e-f8af-5d02-a2d6-05303feeced3', 'data_vg': 'ceph-6d0cb66e-f8af-5d02-a2d6-05303feeced3'}) 2025-05-30 00:58:19.443602 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-50b3064c-7478-543e-8abf-661fdbdc95ce', 'data_vg': 'ceph-50b3064c-7478-543e-8abf-661fdbdc95ce'}) 2025-05-30 00:58:19.443606 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-749c70bc-bf8f-56a3-a425-711d4530659c', 'data_vg': 'ceph-749c70bc-bf8f-56a3-a425-711d4530659c'}) 2025-05-30 00:58:19.443610 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-dfef1ad9-1307-56b8-9770-fa52c7fc01ce', 'data_vg': 'ceph-dfef1ad9-1307-56b8-9770-fa52c7fc01ce'}) 2025-05-30 00:58:19.443614 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f43ff32d-4fc4-5ece-8353-26072ce1c913', 'data_vg': 'ceph-f43ff32d-4fc4-5ece-8353-26072ce1c913'}) 2025-05-30 00:58:19.443618 | orchestrator | 2025-05-30 00:58:19.443622 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm-batch.yml] ************************ 2025-05-30 00:58:19.443626 | orchestrator | Friday 30 May 2025 00:53:56 +0000 (0:00:38.339) 0:08:29.532 ************ 2025-05-30 00:58:19.443631 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.443635 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.443639 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.443643 | orchestrator | 2025-05-30 00:58:19.443647 | orchestrator | TASK [ceph-osd : include_tasks start_osds.yml] 
********************************* 2025-05-30 00:58:19.443651 | orchestrator | Friday 30 May 2025 00:53:56 +0000 (0:00:00.464) 0:08:29.996 ************ 2025-05-30 00:58:19.443655 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:58:19.443659 | orchestrator | 2025-05-30 00:58:19.443663 | orchestrator | TASK [ceph-osd : get osd ids] ************************************************** 2025-05-30 00:58:19.443667 | orchestrator | Friday 30 May 2025 00:53:57 +0000 (0:00:00.540) 0:08:30.537 ************ 2025-05-30 00:58:19.443672 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.443676 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.443680 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.443684 | orchestrator | 2025-05-30 00:58:19.443688 | orchestrator | TASK [ceph-osd : collect osd ids] ********************************************** 2025-05-30 00:58:19.443695 | orchestrator | Friday 30 May 2025 00:53:58 +0000 (0:00:00.638) 0:08:31.175 ************ 2025-05-30 00:58:19.443699 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.443704 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.443708 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:58:19.443712 | orchestrator | 2025-05-30 00:58:19.443716 | orchestrator | TASK [ceph-osd : include_tasks systemd.yml] ************************************ 2025-05-30 00:58:19.443732 | orchestrator | Friday 30 May 2025 00:53:59 +0000 (0:00:01.924) 0:08:33.099 ************ 2025-05-30 00:58:19.443740 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:58:19.443745 | orchestrator | 2025-05-30 00:58:19.443749 | orchestrator | TASK [ceph-osd : generate systemd unit file] *********************************** 2025-05-30 00:58:19.443753 | orchestrator | Friday 30 May 2025 00:54:00 +0000 (0:00:00.576) 0:08:33.676 ************ 2025-05-30 00:58:19.443757 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.443761 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.443765 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:58:19.443769 | orchestrator | 2025-05-30 00:58:19.443773 | orchestrator | TASK [ceph-osd : generate systemd ceph-osd target file] ************************ 2025-05-30 00:58:19.443777 | orchestrator | Friday 30 May 2025 00:54:01 +0000 (0:00:01.379) 0:08:35.056 ************ 2025-05-30 00:58:19.443782 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.443786 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.443790 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:58:19.443794 | orchestrator | 2025-05-30 00:58:19.443798 | orchestrator | TASK [ceph-osd : enable ceph-osd.target] *************************************** 2025-05-30 00:58:19.443802 | orchestrator | Friday 30 May 2025 00:54:03 +0000 (0:00:01.089) 0:08:36.145 ************ 2025-05-30 00:58:19.443806 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.443810 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.443814 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:58:19.443818 | orchestrator | 2025-05-30 00:58:19.443822 | orchestrator | TASK [ceph-osd : ensure systemd service override directory exists] ************* 2025-05-30 00:58:19.443827 | orchestrator | Friday 30 May 2025 00:54:04 +0000 (0:00:01.693) 0:08:37.839 ************ 2025-05-30 00:58:19.443831 | orchestrator | skipping: 
[testbed-node-3] 2025-05-30 00:58:19.443835 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.443839 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.443843 | orchestrator | 2025-05-30 00:58:19.443847 | orchestrator | TASK [ceph-osd : add ceph-osd systemd service overrides] *********************** 2025-05-30 00:58:19.443851 | orchestrator | Friday 30 May 2025 00:54:05 +0000 (0:00:00.306) 0:08:38.145 ************ 2025-05-30 00:58:19.443855 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.443859 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.443863 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.443867 | orchestrator | 2025-05-30 00:58:19.443871 | orchestrator | TASK [ceph-osd : ensure "/var/lib/ceph/osd/{{ cluster }}-{{ item }}" is present] *** 2025-05-30 00:58:19.443876 | orchestrator | Friday 30 May 2025 00:54:05 +0000 (0:00:00.683) 0:08:38.829 ************ 2025-05-30 00:58:19.443880 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-30 00:58:19.443884 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-05-30 00:58:19.443888 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-05-30 00:58:19.443892 | orchestrator | ok: [testbed-node-3] => (item=4) 2025-05-30 00:58:19.443896 | orchestrator | ok: [testbed-node-4] => (item=5) 2025-05-30 00:58:19.443900 | orchestrator | ok: [testbed-node-5] => (item=3) 2025-05-30 00:58:19.443904 | orchestrator | 2025-05-30 00:58:19.443908 | orchestrator | TASK [ceph-osd : systemd start osd] ******************************************** 2025-05-30 00:58:19.443924 | orchestrator | Friday 30 May 2025 00:54:06 +0000 (0:00:01.129) 0:08:39.958 ************ 2025-05-30 00:58:19.443929 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-05-30 00:58:19.443933 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-05-30 00:58:19.443937 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-05-30 00:58:19.443945 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-05-30 00:58:19.443949 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-05-30 00:58:19.443953 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-05-30 00:58:19.443957 | orchestrator | 2025-05-30 00:58:19.443961 | orchestrator | TASK [ceph-osd : unset noup flag] ********************************************** 2025-05-30 00:58:19.443965 | orchestrator | Friday 30 May 2025 00:54:10 +0000 (0:00:03.382) 0:08:43.341 ************ 2025-05-30 00:58:19.443970 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.443974 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.443978 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-30 00:58:19.443982 | orchestrator | 2025-05-30 00:58:19.443986 | orchestrator | TASK [ceph-osd : wait for all osd to be up] ************************************ 2025-05-30 00:58:19.443990 | orchestrator | Friday 30 May 2025 00:54:13 +0000 (0:00:02.827) 0:08:46.169 ************ 2025-05-30 00:58:19.443994 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.443998 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.444002 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: wait for all osd to be up (60 retries left). 
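The sequence above — "set noup flag", create the OSDs with ceph-volume, start the ceph-osd systemd units, then "wait for all osd to be up" before unsetting noup — keeps new OSDs from being marked up piecemeal while the cluster is still being expanded; the single FAILED - RETRYING line is that wait loop polling the monitor once before all six OSDs report up. A hedged sketch of such a wait, assuming `ceph osd stat -f json` run against the first monitor (illustrative only; the delay value is a placeholder, and only the 60-retry count is taken from the log):

- name: set noup flag
  ansible.builtin.command: ceph osd set noup
  delegate_to: "{{ groups['mons'][0] }}"
  run_once: true
  changed_when: true

- name: wait for all osd to be up
  ansible.builtin.command: ceph osd stat -f json
  register: osd_stat
  delegate_to: "{{ groups['mons'][0] }}"
  run_once: true
  changed_when: false
  retries: 60
  delay: 10
  until: >-
    (osd_stat.stdout | from_json).num_osds | int > 0 and
    (osd_stat.stdout | from_json).num_up_osds | int == (osd_stat.stdout | from_json).num_osds | int

- name: unset noup flag
  ansible.builtin.command: ceph osd unset noup
  delegate_to: "{{ groups['mons'][0] }}"
  run_once: true
  changed_when: true

In the log the wait and the unset are delegated from testbed-node-5 to testbed-node-0, which is why testbed-node-3 and testbed-node-4 show "skipping" for those tasks.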
2025-05-30 00:58:19.444006 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-30 00:58:19.444010 | orchestrator | 2025-05-30 00:58:19.444014 | orchestrator | TASK [ceph-osd : include crush_rules.yml] ************************************** 2025-05-30 00:58:19.444019 | orchestrator | Friday 30 May 2025 00:54:25 +0000 (0:00:12.540) 0:08:58.709 ************ 2025-05-30 00:58:19.444023 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.444027 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.444031 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.444035 | orchestrator | 2025-05-30 00:58:19.444039 | orchestrator | TASK [ceph-osd : include openstack_config.yml] ********************************* 2025-05-30 00:58:19.444043 | orchestrator | Friday 30 May 2025 00:54:26 +0000 (0:00:00.450) 0:08:59.160 ************ 2025-05-30 00:58:19.444047 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.444051 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.444055 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.444059 | orchestrator | 2025-05-30 00:58:19.444063 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-30 00:58:19.444067 | orchestrator | Friday 30 May 2025 00:54:27 +0000 (0:00:01.279) 0:09:00.440 ************ 2025-05-30 00:58:19.444072 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.444076 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.444080 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:58:19.444084 | orchestrator | 2025-05-30 00:58:19.444088 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-05-30 00:58:19.444103 | orchestrator | Friday 30 May 2025 00:54:28 +0000 (0:00:00.912) 0:09:01.352 ************ 2025-05-30 00:58:19.444110 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:58:19.444115 | orchestrator | 2025-05-30 00:58:19.444119 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact trigger_restart] ********************** 2025-05-30 00:58:19.444123 | orchestrator | Friday 30 May 2025 00:54:28 +0000 (0:00:00.537) 0:09:01.890 ************ 2025-05-30 00:58:19.444127 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-30 00:58:19.444131 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-30 00:58:19.444135 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-30 00:58:19.444139 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.444143 | orchestrator | 2025-05-30 00:58:19.444147 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called before restart] ******** 2025-05-30 00:58:19.444152 | orchestrator | Friday 30 May 2025 00:54:29 +0000 (0:00:00.424) 0:09:02.314 ************ 2025-05-30 00:58:19.444156 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.444160 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.444168 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.444172 | orchestrator | 2025-05-30 00:58:19.444176 | orchestrator | RUNNING HANDLER [ceph-handler : unset noup flag] ******************************* 2025-05-30 00:58:19.444180 | orchestrator | Friday 30 May 2025 00:54:29 +0000 (0:00:00.329) 0:09:02.644 ************ 2025-05-30 00:58:19.444184 | orchestrator | skipping: [testbed-node-3] 2025-05-30 
00:58:19.444188 | orchestrator | 2025-05-30 00:58:19.444192 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] *********************** 2025-05-30 00:58:19.444197 | orchestrator | Friday 30 May 2025 00:54:29 +0000 (0:00:00.233) 0:09:02.877 ************ 2025-05-30 00:58:19.444201 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.444205 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.444209 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.444213 | orchestrator | 2025-05-30 00:58:19.444217 | orchestrator | RUNNING HANDLER [ceph-handler : get pool list] ********************************* 2025-05-30 00:58:19.444221 | orchestrator | Friday 30 May 2025 00:54:30 +0000 (0:00:00.607) 0:09:03.484 ************ 2025-05-30 00:58:19.444225 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.444229 | orchestrator | 2025-05-30 00:58:19.444233 | orchestrator | RUNNING HANDLER [ceph-handler : get balancer module status] ******************** 2025-05-30 00:58:19.444237 | orchestrator | Friday 30 May 2025 00:54:30 +0000 (0:00:00.264) 0:09:03.748 ************ 2025-05-30 00:58:19.444241 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.444246 | orchestrator | 2025-05-30 00:58:19.444250 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-05-30 00:58:19.444254 | orchestrator | Friday 30 May 2025 00:54:30 +0000 (0:00:00.249) 0:09:03.998 ************ 2025-05-30 00:58:19.444258 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.444262 | orchestrator | 2025-05-30 00:58:19.444266 | orchestrator | RUNNING HANDLER [ceph-handler : disable balancer] ****************************** 2025-05-30 00:58:19.444270 | orchestrator | Friday 30 May 2025 00:54:31 +0000 (0:00:00.147) 0:09:04.145 ************ 2025-05-30 00:58:19.444274 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.444278 | orchestrator | 2025-05-30 00:58:19.444282 | orchestrator | RUNNING HANDLER [ceph-handler : disable pg autoscale on pools] ***************** 2025-05-30 00:58:19.444286 | orchestrator | Friday 30 May 2025 00:54:31 +0000 (0:00:00.236) 0:09:04.382 ************ 2025-05-30 00:58:19.444291 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.444295 | orchestrator | 2025-05-30 00:58:19.444299 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph osds daemon(s)] ******************* 2025-05-30 00:58:19.444303 | orchestrator | Friday 30 May 2025 00:54:31 +0000 (0:00:00.250) 0:09:04.632 ************ 2025-05-30 00:58:19.444307 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-30 00:58:19.444311 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-30 00:58:19.444315 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-30 00:58:19.444319 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.444323 | orchestrator | 2025-05-30 00:58:19.444327 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called after restart] ********* 2025-05-30 00:58:19.444331 | orchestrator | Friday 30 May 2025 00:54:31 +0000 (0:00:00.501) 0:09:05.133 ************ 2025-05-30 00:58:19.444336 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.444340 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.444344 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.444348 | orchestrator | 2025-05-30 00:58:19.444352 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable pg 
autoscale on pools] *************** 2025-05-30 00:58:19.444356 | orchestrator | Friday 30 May 2025 00:54:32 +0000 (0:00:00.674) 0:09:05.808 ************ 2025-05-30 00:58:19.444360 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.444364 | orchestrator | 2025-05-30 00:58:19.444368 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable balancer] **************************** 2025-05-30 00:58:19.444372 | orchestrator | Friday 30 May 2025 00:54:32 +0000 (0:00:00.223) 0:09:06.031 ************ 2025-05-30 00:58:19.444376 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.444384 | orchestrator | 2025-05-30 00:58:19.444388 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-30 00:58:19.444392 | orchestrator | Friday 30 May 2025 00:54:33 +0000 (0:00:00.241) 0:09:06.272 ************ 2025-05-30 00:58:19.444396 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.444400 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:58:19.444404 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.444408 | orchestrator | 2025-05-30 00:58:19.444412 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-05-30 00:58:19.444417 | orchestrator | 2025-05-30 00:58:19.444421 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-30 00:58:19.444425 | orchestrator | Friday 30 May 2025 00:54:36 +0000 (0:00:03.239) 0:09:09.512 ************ 2025-05-30 00:58:19.444429 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:58:19.444434 | orchestrator | 2025-05-30 00:58:19.444449 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-30 00:58:19.444456 | orchestrator | Friday 30 May 2025 00:54:37 +0000 (0:00:01.342) 0:09:10.855 ************ 2025-05-30 00:58:19.444460 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.444465 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.444469 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.444473 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.444477 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.444481 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.444485 | orchestrator | 2025-05-30 00:58:19.444489 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-30 00:58:19.444493 | orchestrator | Friday 30 May 2025 00:54:38 +0000 (0:00:00.783) 0:09:11.638 ************ 2025-05-30 00:58:19.444497 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.444501 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.444506 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.444510 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.444514 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.444518 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.444522 | orchestrator | 2025-05-30 00:58:19.444526 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-30 00:58:19.444530 | orchestrator | Friday 30 May 2025 00:54:39 +0000 (0:00:01.292) 0:09:12.930 ************ 2025-05-30 00:58:19.444534 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.444538 | orchestrator | skipping: 
[testbed-node-1] 2025-05-30 00:58:19.444542 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.444546 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.444551 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.444555 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.444559 | orchestrator | 2025-05-30 00:58:19.444563 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-30 00:58:19.444567 | orchestrator | Friday 30 May 2025 00:54:41 +0000 (0:00:01.264) 0:09:14.195 ************ 2025-05-30 00:58:19.444571 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.444575 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.444579 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.444583 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.444587 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.444591 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.444595 | orchestrator | 2025-05-30 00:58:19.444599 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-30 00:58:19.444604 | orchestrator | Friday 30 May 2025 00:54:42 +0000 (0:00:01.051) 0:09:15.246 ************ 2025-05-30 00:58:19.444608 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.444612 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.444616 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.444620 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.444627 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.444631 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.444635 | orchestrator | 2025-05-30 00:58:19.444640 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-30 00:58:19.444644 | orchestrator | Friday 30 May 2025 00:54:43 +0000 (0:00:00.995) 0:09:16.242 ************ 2025-05-30 00:58:19.444648 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.444652 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.444656 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.444660 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.444664 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.444668 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.444672 | orchestrator | 2025-05-30 00:58:19.444676 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-30 00:58:19.444680 | orchestrator | Friday 30 May 2025 00:54:43 +0000 (0:00:00.714) 0:09:16.957 ************ 2025-05-30 00:58:19.444684 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.444688 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.444692 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.444696 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.444700 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.444705 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.444709 | orchestrator | 2025-05-30 00:58:19.444713 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-30 00:58:19.444717 | orchestrator | Friday 30 May 2025 00:54:44 +0000 (0:00:00.851) 0:09:17.808 ************ 2025-05-30 00:58:19.444721 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.444725 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.444729 | 
orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.444733 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.444737 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.444741 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.444745 | orchestrator | 2025-05-30 00:58:19.444749 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-30 00:58:19.444753 | orchestrator | Friday 30 May 2025 00:54:45 +0000 (0:00:00.654) 0:09:18.463 ************ 2025-05-30 00:58:19.444757 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.444761 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.444766 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.444770 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.444774 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.444778 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.444782 | orchestrator | 2025-05-30 00:58:19.444786 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-30 00:58:19.444790 | orchestrator | Friday 30 May 2025 00:54:46 +0000 (0:00:00.928) 0:09:19.391 ************ 2025-05-30 00:58:19.444794 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.444798 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.444802 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.444807 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.444811 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.444815 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.444819 | orchestrator | 2025-05-30 00:58:19.444825 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-30 00:58:19.444829 | orchestrator | Friday 30 May 2025 00:54:47 +0000 (0:00:00.763) 0:09:20.154 ************ 2025-05-30 00:58:19.444833 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.444837 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.444841 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.444856 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.444861 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.444865 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.444869 | orchestrator | 2025-05-30 00:58:19.444876 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-30 00:58:19.444883 | orchestrator | Friday 30 May 2025 00:54:48 +0000 (0:00:01.088) 0:09:21.243 ************ 2025-05-30 00:58:19.444887 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.444892 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.444896 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.444900 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.444904 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.444908 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.444935 | orchestrator | 2025-05-30 00:58:19.444939 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-30 00:58:19.444944 | orchestrator | Friday 30 May 2025 00:54:48 +0000 (0:00:00.648) 0:09:21.892 ************ 2025-05-30 00:58:19.444948 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.444952 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.444956 | orchestrator | ok: [testbed-node-2] 
2025-05-30 00:58:19.444960 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.444964 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.444968 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.444972 | orchestrator | 2025-05-30 00:58:19.444976 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-30 00:58:19.444981 | orchestrator | Friday 30 May 2025 00:54:49 +0000 (0:00:00.982) 0:09:22.874 ************ 2025-05-30 00:58:19.444985 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.444989 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.444993 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.444997 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.445001 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.445005 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.445009 | orchestrator | 2025-05-30 00:58:19.445013 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-30 00:58:19.445017 | orchestrator | Friday 30 May 2025 00:54:50 +0000 (0:00:00.682) 0:09:23.557 ************ 2025-05-30 00:58:19.445022 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.445026 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.445030 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.445034 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.445038 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.445042 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.445046 | orchestrator | 2025-05-30 00:58:19.445050 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-30 00:58:19.445054 | orchestrator | Friday 30 May 2025 00:54:51 +0000 (0:00:00.883) 0:09:24.440 ************ 2025-05-30 00:58:19.445058 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.445062 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.445066 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.445070 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.445074 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.445079 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.445083 | orchestrator | 2025-05-30 00:58:19.445087 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-30 00:58:19.445091 | orchestrator | Friday 30 May 2025 00:54:51 +0000 (0:00:00.655) 0:09:25.095 ************ 2025-05-30 00:58:19.445095 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.445099 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.445103 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.445107 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.445111 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.445115 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.445119 | orchestrator | 2025-05-30 00:58:19.445123 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-30 00:58:19.445127 | orchestrator | Friday 30 May 2025 00:54:52 +0000 (0:00:00.843) 0:09:25.939 ************ 2025-05-30 00:58:19.445131 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.445139 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.445143 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.445147 | orchestrator | 
skipping: [testbed-node-3] 2025-05-30 00:58:19.445151 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.445155 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.445159 | orchestrator | 2025-05-30 00:58:19.445163 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-30 00:58:19.445167 | orchestrator | Friday 30 May 2025 00:54:53 +0000 (0:00:00.623) 0:09:26.563 ************ 2025-05-30 00:58:19.445171 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.445176 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.445180 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.445184 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.445188 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.445192 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.445196 | orchestrator | 2025-05-30 00:58:19.445200 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-30 00:58:19.445204 | orchestrator | Friday 30 May 2025 00:54:54 +0000 (0:00:00.835) 0:09:27.398 ************ 2025-05-30 00:58:19.445209 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.445212 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.445216 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.445220 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.445223 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.445227 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.445231 | orchestrator | 2025-05-30 00:58:19.445235 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-30 00:58:19.445238 | orchestrator | Friday 30 May 2025 00:54:54 +0000 (0:00:00.642) 0:09:28.041 ************ 2025-05-30 00:58:19.445242 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.445246 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.445249 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.445253 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.445257 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.445260 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.445264 | orchestrator | 2025-05-30 00:58:19.445268 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-30 00:58:19.445272 | orchestrator | Friday 30 May 2025 00:54:55 +0000 (0:00:00.873) 0:09:28.914 ************ 2025-05-30 00:58:19.445276 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.445279 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.445295 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.445300 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.445306 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.445310 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.445313 | orchestrator | 2025-05-30 00:58:19.445317 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-30 00:58:19.445321 | orchestrator | Friday 30 May 2025 00:54:56 +0000 (0:00:00.631) 0:09:29.545 ************ 2025-05-30 00:58:19.445325 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.445328 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.445332 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.445336 | orchestrator | skipping: [testbed-node-3] 2025-05-30 
00:58:19.445339 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.445343 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.445347 | orchestrator | 2025-05-30 00:58:19.445351 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-30 00:58:19.445354 | orchestrator | Friday 30 May 2025 00:54:57 +0000 (0:00:00.830) 0:09:30.376 ************ 2025-05-30 00:58:19.445358 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.445362 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.445365 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.445369 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.445373 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.445381 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.445385 | orchestrator | 2025-05-30 00:58:19.445388 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-30 00:58:19.445392 | orchestrator | Friday 30 May 2025 00:54:57 +0000 (0:00:00.649) 0:09:31.025 ************ 2025-05-30 00:58:19.445396 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.445399 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.445403 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.445407 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.445411 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.445414 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.445418 | orchestrator | 2025-05-30 00:58:19.445422 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-30 00:58:19.445425 | orchestrator | Friday 30 May 2025 00:54:58 +0000 (0:00:00.844) 0:09:31.870 ************ 2025-05-30 00:58:19.445429 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.445433 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.445436 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.445440 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.445444 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.445447 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.445451 | orchestrator | 2025-05-30 00:58:19.445455 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-30 00:58:19.445459 | orchestrator | Friday 30 May 2025 00:54:59 +0000 (0:00:00.606) 0:09:32.476 ************ 2025-05-30 00:58:19.445462 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.445466 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.445470 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.445473 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.445477 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.445481 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.445484 | orchestrator | 2025-05-30 00:58:19.445488 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-30 00:58:19.445492 | orchestrator | Friday 30 May 2025 00:55:00 +0000 (0:00:00.875) 0:09:33.351 ************ 2025-05-30 00:58:19.445496 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.445499 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.445503 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.445507 | orchestrator | skipping: 
[testbed-node-3] 2025-05-30 00:58:19.445511 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.445514 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.445518 | orchestrator | 2025-05-30 00:58:19.445522 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-30 00:58:19.445526 | orchestrator | Friday 30 May 2025 00:55:00 +0000 (0:00:00.693) 0:09:34.045 ************ 2025-05-30 00:58:19.445529 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.445533 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.445537 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.445540 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.445544 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.445548 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.445551 | orchestrator | 2025-05-30 00:58:19.445555 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-30 00:58:19.445559 | orchestrator | Friday 30 May 2025 00:55:01 +0000 (0:00:00.868) 0:09:34.914 ************ 2025-05-30 00:58:19.445563 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.445566 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.445570 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.445574 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.445577 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.445581 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.445585 | orchestrator | 2025-05-30 00:58:19.445591 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-30 00:58:19.445595 | orchestrator | Friday 30 May 2025 00:55:02 +0000 (0:00:00.653) 0:09:35.567 ************ 2025-05-30 00:58:19.445599 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.445602 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.445606 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.445610 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.445613 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.445617 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.445621 | orchestrator | 2025-05-30 00:58:19.445624 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-30 00:58:19.445628 | orchestrator | Friday 30 May 2025 00:55:03 +0000 (0:00:00.893) 0:09:36.460 ************ 2025-05-30 00:58:19.445632 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.445636 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.445639 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.445643 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.445657 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.445661 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.445665 | orchestrator | 2025-05-30 00:58:19.445671 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-30 00:58:19.445675 | orchestrator | Friday 30 May 2025 00:55:03 +0000 (0:00:00.664) 0:09:37.125 ************ 2025-05-30 00:58:19.445679 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-30 00:58:19.445683 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-30 
00:58:19.445687 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.445690 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-30 00:58:19.445694 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-30 00:58:19.445698 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.445701 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-30 00:58:19.445705 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-30 00:58:19.445709 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.445713 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-30 00:58:19.445716 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-30 00:58:19.445720 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.445724 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-30 00:58:19.445727 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-30 00:58:19.445731 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.445735 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-30 00:58:19.445739 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-30 00:58:19.445742 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.445746 | orchestrator | 2025-05-30 00:58:19.445750 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-30 00:58:19.445754 | orchestrator | Friday 30 May 2025 00:55:04 +0000 (0:00:00.948) 0:09:38.074 ************ 2025-05-30 00:58:19.445757 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-30 00:58:19.445761 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-30 00:58:19.445765 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.445769 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-30 00:58:19.445772 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-30 00:58:19.445776 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.445780 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-30 00:58:19.445783 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-30 00:58:19.445787 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.445791 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-30 00:58:19.445795 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-30 00:58:19.445801 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.445805 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-30 00:58:19.445809 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-30 00:58:19.445812 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.445816 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-30 00:58:19.445820 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-30 00:58:19.445823 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.445827 | orchestrator | 2025-05-30 00:58:19.445831 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-30 00:58:19.445835 | orchestrator | Friday 30 May 2025 00:55:05 +0000 (0:00:00.814) 0:09:38.889 ************ 2025-05-30 00:58:19.445838 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.445842 | 
orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.445846 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.445849 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.445853 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.445857 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.445861 | orchestrator | 2025-05-30 00:58:19.445864 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-30 00:58:19.445868 | orchestrator | Friday 30 May 2025 00:55:06 +0000 (0:00:00.942) 0:09:39.832 ************ 2025-05-30 00:58:19.445872 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.445876 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.445879 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.445883 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.445887 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.445891 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.445894 | orchestrator | 2025-05-30 00:58:19.445898 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-30 00:58:19.445902 | orchestrator | Friday 30 May 2025 00:55:07 +0000 (0:00:00.667) 0:09:40.499 ************ 2025-05-30 00:58:19.445906 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.445909 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.445925 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.445929 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.445932 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.445936 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.445940 | orchestrator | 2025-05-30 00:58:19.445943 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-30 00:58:19.445947 | orchestrator | Friday 30 May 2025 00:55:08 +0000 (0:00:00.949) 0:09:41.449 ************ 2025-05-30 00:58:19.445951 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.445955 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.445958 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.445962 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.445966 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.445969 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.445973 | orchestrator | 2025-05-30 00:58:19.445977 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-30 00:58:19.445981 | orchestrator | Friday 30 May 2025 00:55:09 +0000 (0:00:00.727) 0:09:42.176 ************ 2025-05-30 00:58:19.445984 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.445999 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.446003 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.446052 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.446058 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.446062 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.446065 | orchestrator | 2025-05-30 00:58:19.446069 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-30 00:58:19.446073 | orchestrator | Friday 30 May 2025 00:55:09 +0000 (0:00:00.861) 0:09:43.038 ************ 2025-05-30 00:58:19.446080 | 
orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.446084 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.446088 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.446091 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.446095 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.446099 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.446103 | orchestrator | 2025-05-30 00:58:19.446106 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-30 00:58:19.446110 | orchestrator | Friday 30 May 2025 00:55:10 +0000 (0:00:00.557) 0:09:43.596 ************ 2025-05-30 00:58:19.446114 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-30 00:58:19.446118 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-30 00:58:19.446121 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-30 00:58:19.446125 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.446129 | orchestrator | 2025-05-30 00:58:19.446133 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-30 00:58:19.446136 | orchestrator | Friday 30 May 2025 00:55:10 +0000 (0:00:00.345) 0:09:43.941 ************ 2025-05-30 00:58:19.446140 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-30 00:58:19.446144 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-30 00:58:19.446148 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-30 00:58:19.446151 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.446155 | orchestrator | 2025-05-30 00:58:19.446159 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-30 00:58:19.446162 | orchestrator | Friday 30 May 2025 00:55:11 +0000 (0:00:00.313) 0:09:44.254 ************ 2025-05-30 00:58:19.446166 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-30 00:58:19.446170 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-30 00:58:19.446174 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-30 00:58:19.446177 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.446181 | orchestrator | 2025-05-30 00:58:19.446185 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-30 00:58:19.446188 | orchestrator | Friday 30 May 2025 00:55:11 +0000 (0:00:00.589) 0:09:44.844 ************ 2025-05-30 00:58:19.446192 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.446196 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.446200 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.446203 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.446207 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.446211 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.446214 | orchestrator | 2025-05-30 00:58:19.446218 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-30 00:58:19.446222 | orchestrator | Friday 30 May 2025 00:55:12 +0000 (0:00:00.724) 0:09:45.569 ************ 2025-05-30 00:58:19.446226 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-30 00:58:19.446229 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.446233 | orchestrator | skipping: 
[testbed-node-1] => (item=0)  2025-05-30 00:58:19.446237 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.446241 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-30 00:58:19.446244 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.446248 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-30 00:58:19.446252 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.446255 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-30 00:58:19.446259 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.446263 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-30 00:58:19.446266 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.446270 | orchestrator | 2025-05-30 00:58:19.446277 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-30 00:58:19.446280 | orchestrator | Friday 30 May 2025 00:55:13 +0000 (0:00:00.884) 0:09:46.453 ************ 2025-05-30 00:58:19.446284 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.446288 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.446292 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.446295 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.446299 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.446303 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.446307 | orchestrator | 2025-05-30 00:58:19.446310 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-30 00:58:19.446314 | orchestrator | Friday 30 May 2025 00:55:14 +0000 (0:00:00.732) 0:09:47.186 ************ 2025-05-30 00:58:19.446318 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.446321 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.446325 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.446329 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.446333 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.446336 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.446340 | orchestrator | 2025-05-30 00:58:19.446344 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-30 00:58:19.446347 | orchestrator | Friday 30 May 2025 00:55:14 +0000 (0:00:00.693) 0:09:47.880 ************ 2025-05-30 00:58:19.446351 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-30 00:58:19.446355 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.446359 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-30 00:58:19.446362 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.446366 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-30 00:58:19.446370 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.446374 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-30 00:58:19.446424 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.446429 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-30 00:58:19.446436 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.446440 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-30 00:58:19.446444 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.446448 | orchestrator | 2025-05-30 00:58:19.446451 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-30 
00:58:19.446455 | orchestrator | Friday 30 May 2025 00:55:16 +0000 (0:00:01.357) 0:09:49.237 ************ 2025-05-30 00:58:19.446459 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.446463 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.446466 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.446470 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-30 00:58:19.446474 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.446478 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-30 00:58:19.446482 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.446486 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-30 00:58:19.446490 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.446493 | orchestrator | 2025-05-30 00:58:19.446497 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-30 00:58:19.446501 | orchestrator | Friday 30 May 2025 00:55:16 +0000 (0:00:00.737) 0:09:49.975 ************ 2025-05-30 00:58:19.446505 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-30 00:58:19.446508 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-30 00:58:19.446512 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-30 00:58:19.446519 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.446523 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-30 00:58:19.446527 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-30 00:58:19.446531 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-30 00:58:19.446534 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.446538 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-30 00:58:19.446542 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-30 00:58:19.446546 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-30 00:58:19.446549 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.446553 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-30 00:58:19.446557 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-30 00:58:19.446560 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-30 00:58:19.446564 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-30 00:58:19.446568 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.446572 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-30 00:58:19.446575 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-30 00:58:19.446579 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.446583 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-30 00:58:19.446587 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-30 00:58:19.446590 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-30 00:58:19.446594 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.446598 | 
orchestrator | 2025-05-30 00:58:19.446602 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-30 00:58:19.446605 | orchestrator | Friday 30 May 2025 00:55:18 +0000 (0:00:01.215) 0:09:51.190 ************ 2025-05-30 00:58:19.446609 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.446613 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.446617 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.446621 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.446624 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.446628 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.446632 | orchestrator | 2025-05-30 00:58:19.446635 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-30 00:58:19.446639 | orchestrator | Friday 30 May 2025 00:55:19 +0000 (0:00:01.097) 0:09:52.288 ************ 2025-05-30 00:58:19.446643 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.446647 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.446650 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.446654 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-30 00:58:19.446658 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.446662 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-30 00:58:19.446665 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.446669 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-30 00:58:19.446673 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.446677 | orchestrator | 2025-05-30 00:58:19.446680 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-30 00:58:19.446684 | orchestrator | Friday 30 May 2025 00:55:20 +0000 (0:00:01.109) 0:09:53.397 ************ 2025-05-30 00:58:19.446688 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.446692 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.446695 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.446699 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.446703 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.446706 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.446710 | orchestrator | 2025-05-30 00:58:19.446714 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-30 00:58:19.446721 | orchestrator | Friday 30 May 2025 00:55:21 +0000 (0:00:01.231) 0:09:54.629 ************ 2025-05-30 00:58:19.446736 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:19.446740 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:19.446746 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:19.446750 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.446754 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.446758 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.446761 | orchestrator | 2025-05-30 00:58:19.446765 | orchestrator | TASK [ceph-crash : create client.crash keyring] ******************************** 2025-05-30 00:58:19.446769 | orchestrator | Friday 30 May 2025 00:55:22 +0000 (0:00:01.404) 0:09:56.034 ************ 2025-05-30 00:58:19.446773 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:19.446776 | orchestrator | 2025-05-30 00:58:19.446780 | orchestrator | TASK [ceph-crash : get keys 
from monitors] ************************************* 2025-05-30 00:58:19.446784 | orchestrator | Friday 30 May 2025 00:55:26 +0000 (0:00:03.528) 0:09:59.562 ************ 2025-05-30 00:58:19.446788 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.446791 | orchestrator | 2025-05-30 00:58:19.446795 | orchestrator | TASK [ceph-crash : copy ceph key(s) if needed] ********************************* 2025-05-30 00:58:19.446799 | orchestrator | Friday 30 May 2025 00:55:28 +0000 (0:00:01.831) 0:10:01.394 ************ 2025-05-30 00:58:19.446803 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.446806 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:19.446810 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:19.446814 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.446818 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.446821 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:58:19.446825 | orchestrator | 2025-05-30 00:58:19.446829 | orchestrator | TASK [ceph-crash : create /var/lib/ceph/crash/posted] ************************** 2025-05-30 00:58:19.446833 | orchestrator | Friday 30 May 2025 00:55:30 +0000 (0:00:01.841) 0:10:03.236 ************ 2025-05-30 00:58:19.446836 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:19.446840 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:19.446844 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:19.446848 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.446851 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.446855 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:58:19.446859 | orchestrator | 2025-05-30 00:58:19.446863 | orchestrator | TASK [ceph-crash : include_tasks systemd.yml] ********************************** 2025-05-30 00:58:19.446866 | orchestrator | Friday 30 May 2025 00:55:31 +0000 (0:00:00.901) 0:10:04.137 ************ 2025-05-30 00:58:19.446870 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:58:19.446874 | orchestrator | 2025-05-30 00:58:19.446878 | orchestrator | TASK [ceph-crash : generate systemd unit file for ceph-crash container] ******** 2025-05-30 00:58:19.446882 | orchestrator | Friday 30 May 2025 00:55:32 +0000 (0:00:01.511) 0:10:05.649 ************ 2025-05-30 00:58:19.446886 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:19.446889 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:19.446893 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:19.446897 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.446900 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.446904 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:58:19.446908 | orchestrator | 2025-05-30 00:58:19.446923 | orchestrator | TASK [ceph-crash : start the ceph-crash service] ******************************* 2025-05-30 00:58:19.446927 | orchestrator | Friday 30 May 2025 00:55:34 +0000 (0:00:01.832) 0:10:07.482 ************ 2025-05-30 00:58:19.446931 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:19.446935 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:19.446938 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:19.446942 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.446946 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.446953 | orchestrator | changed: [testbed-node-5] 2025-05-30 
00:58:19.446957 | orchestrator | 2025-05-30 00:58:19.446960 | orchestrator | RUNNING HANDLER [ceph-handler : ceph crash handler] **************************** 2025-05-30 00:58:19.446964 | orchestrator | Friday 30 May 2025 00:55:38 +0000 (0:00:03.921) 0:10:11.403 ************ 2025-05-30 00:58:19.446968 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:58:19.446972 | orchestrator | 2025-05-30 00:58:19.446976 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called before restart] ****** 2025-05-30 00:58:19.446980 | orchestrator | Friday 30 May 2025 00:55:39 +0000 (0:00:01.321) 0:10:12.724 ************ 2025-05-30 00:58:19.446983 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.446987 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.446991 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.446995 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.446999 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.447002 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.447006 | orchestrator | 2025-05-30 00:58:19.447010 | orchestrator | RUNNING HANDLER [ceph-handler : restart the ceph-crash service] **************** 2025-05-30 00:58:19.447014 | orchestrator | Friday 30 May 2025 00:55:40 +0000 (0:00:00.663) 0:10:13.387 ************ 2025-05-30 00:58:19.447017 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:19.447021 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:19.447025 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:19.447029 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.447032 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.447036 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:58:19.447040 | orchestrator | 2025-05-30 00:58:19.447044 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called after restart] ******* 2025-05-30 00:58:19.447047 | orchestrator | Friday 30 May 2025 00:55:42 +0000 (0:00:02.404) 0:10:15.792 ************ 2025-05-30 00:58:19.447051 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:19.447055 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:19.447059 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:19.447062 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.447066 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.447070 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.447093 | orchestrator | 2025-05-30 00:58:19.447109 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-05-30 00:58:19.447113 | orchestrator | 2025-05-30 00:58:19.447117 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-30 00:58:19.447123 | orchestrator | Friday 30 May 2025 00:55:45 +0000 (0:00:02.787) 0:10:18.579 ************ 2025-05-30 00:58:19.447130 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:58:19.447133 | orchestrator | 2025-05-30 00:58:19.447137 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-30 00:58:19.447141 | orchestrator | Friday 30 May 2025 00:55:46 +0000 (0:00:00.757) 0:10:19.337 ************ 2025-05-30 00:58:19.447145 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.447149 | 
orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.447152 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.447156 | orchestrator | 2025-05-30 00:58:19.447160 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-30 00:58:19.447164 | orchestrator | Friday 30 May 2025 00:55:46 +0000 (0:00:00.339) 0:10:19.677 ************ 2025-05-30 00:58:19.447167 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.447171 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.447175 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.447179 | orchestrator | 2025-05-30 00:58:19.447183 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-30 00:58:19.447186 | orchestrator | Friday 30 May 2025 00:55:47 +0000 (0:00:00.686) 0:10:20.363 ************ 2025-05-30 00:58:19.447194 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.447198 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.447202 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.447205 | orchestrator | 2025-05-30 00:58:19.447209 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-30 00:58:19.447213 | orchestrator | Friday 30 May 2025 00:55:47 +0000 (0:00:00.707) 0:10:21.071 ************ 2025-05-30 00:58:19.447217 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.447220 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.447224 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.447228 | orchestrator | 2025-05-30 00:58:19.447232 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-30 00:58:19.447235 | orchestrator | Friday 30 May 2025 00:55:49 +0000 (0:00:01.102) 0:10:22.173 ************ 2025-05-30 00:58:19.447239 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.447243 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.447247 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.447250 | orchestrator | 2025-05-30 00:58:19.447254 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-30 00:58:19.447258 | orchestrator | Friday 30 May 2025 00:55:49 +0000 (0:00:00.347) 0:10:22.521 ************ 2025-05-30 00:58:19.447262 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.447265 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.447269 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.447273 | orchestrator | 2025-05-30 00:58:19.447277 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-30 00:58:19.447280 | orchestrator | Friday 30 May 2025 00:55:49 +0000 (0:00:00.336) 0:10:22.858 ************ 2025-05-30 00:58:19.447284 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.447288 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.447292 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.447295 | orchestrator | 2025-05-30 00:58:19.447299 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-30 00:58:19.447303 | orchestrator | Friday 30 May 2025 00:55:50 +0000 (0:00:00.331) 0:10:23.189 ************ 2025-05-30 00:58:19.447307 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.447310 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.447314 | orchestrator | skipping: [testbed-node-5] 
2025-05-30 00:58:19.447318 | orchestrator | 2025-05-30 00:58:19.447322 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-30 00:58:19.447326 | orchestrator | Friday 30 May 2025 00:55:50 +0000 (0:00:00.735) 0:10:23.924 ************ 2025-05-30 00:58:19.447329 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.447333 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.447337 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.447341 | orchestrator | 2025-05-30 00:58:19.447345 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-30 00:58:19.447348 | orchestrator | Friday 30 May 2025 00:55:51 +0000 (0:00:00.343) 0:10:24.267 ************ 2025-05-30 00:58:19.447352 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.447356 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.447360 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.447363 | orchestrator | 2025-05-30 00:58:19.447367 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-30 00:58:19.447371 | orchestrator | Friday 30 May 2025 00:55:51 +0000 (0:00:00.380) 0:10:24.648 ************ 2025-05-30 00:58:19.447375 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.447379 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.447382 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.447386 | orchestrator | 2025-05-30 00:58:19.447390 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-30 00:58:19.447394 | orchestrator | Friday 30 May 2025 00:55:52 +0000 (0:00:00.718) 0:10:25.366 ************ 2025-05-30 00:58:19.447397 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.447405 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.447409 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.447413 | orchestrator | 2025-05-30 00:58:19.447417 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-30 00:58:19.447420 | orchestrator | Friday 30 May 2025 00:55:52 +0000 (0:00:00.564) 0:10:25.931 ************ 2025-05-30 00:58:19.447424 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.447428 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.447432 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.447435 | orchestrator | 2025-05-30 00:58:19.447439 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-30 00:58:19.447443 | orchestrator | Friday 30 May 2025 00:55:53 +0000 (0:00:00.322) 0:10:26.254 ************ 2025-05-30 00:58:19.447447 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.447450 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.447454 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.447458 | orchestrator | 2025-05-30 00:58:19.447462 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-30 00:58:19.447469 | orchestrator | Friday 30 May 2025 00:55:53 +0000 (0:00:00.354) 0:10:26.608 ************ 2025-05-30 00:58:19.447473 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.447481 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.447485 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.447488 | orchestrator | 2025-05-30 00:58:19.447492 | orchestrator | TASK [ceph-handler : 
set_fact handler_rgw_status] ****************************** 2025-05-30 00:58:19.447496 | orchestrator | Friday 30 May 2025 00:55:53 +0000 (0:00:00.331) 0:10:26.939 ************ 2025-05-30 00:58:19.447500 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.447504 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.447507 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.447511 | orchestrator | 2025-05-30 00:58:19.447515 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-30 00:58:19.447519 | orchestrator | Friday 30 May 2025 00:55:54 +0000 (0:00:00.821) 0:10:27.761 ************ 2025-05-30 00:58:19.447522 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.447526 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.447530 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.447534 | orchestrator | 2025-05-30 00:58:19.447538 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-30 00:58:19.447541 | orchestrator | Friday 30 May 2025 00:55:54 +0000 (0:00:00.365) 0:10:28.127 ************ 2025-05-30 00:58:19.447545 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.447549 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.447553 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.447556 | orchestrator | 2025-05-30 00:58:19.447560 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-30 00:58:19.447564 | orchestrator | Friday 30 May 2025 00:55:55 +0000 (0:00:00.433) 0:10:28.560 ************ 2025-05-30 00:58:19.447568 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.447571 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.447575 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.447579 | orchestrator | 2025-05-30 00:58:19.447591 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-30 00:58:19.447595 | orchestrator | Friday 30 May 2025 00:55:55 +0000 (0:00:00.329) 0:10:28.890 ************ 2025-05-30 00:58:19.447598 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.447602 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.447606 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.447610 | orchestrator | 2025-05-30 00:58:19.447613 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-30 00:58:19.447617 | orchestrator | Friday 30 May 2025 00:55:56 +0000 (0:00:00.655) 0:10:29.546 ************ 2025-05-30 00:58:19.447621 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.447625 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.447628 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.447635 | orchestrator | 2025-05-30 00:58:19.447639 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-30 00:58:19.447643 | orchestrator | Friday 30 May 2025 00:55:56 +0000 (0:00:00.339) 0:10:29.886 ************ 2025-05-30 00:58:19.447646 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.447650 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.447654 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.447658 | orchestrator | 2025-05-30 00:58:19.447662 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-30 00:58:19.447665 | 
orchestrator | Friday 30 May 2025 00:55:57 +0000 (0:00:00.351) 0:10:30.238 ************ 2025-05-30 00:58:19.447669 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.447673 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.447676 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.447680 | orchestrator | 2025-05-30 00:58:19.447684 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-30 00:58:19.447688 | orchestrator | Friday 30 May 2025 00:55:57 +0000 (0:00:00.336) 0:10:30.574 ************ 2025-05-30 00:58:19.447691 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.447695 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.447699 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.447703 | orchestrator | 2025-05-30 00:58:19.447706 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-30 00:58:19.447710 | orchestrator | Friday 30 May 2025 00:55:58 +0000 (0:00:00.614) 0:10:31.188 ************ 2025-05-30 00:58:19.447714 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.447718 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.447721 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.447725 | orchestrator | 2025-05-30 00:58:19.447729 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-30 00:58:19.447733 | orchestrator | Friday 30 May 2025 00:55:58 +0000 (0:00:00.423) 0:10:31.612 ************ 2025-05-30 00:58:19.447736 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.447740 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.447744 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.447747 | orchestrator | 2025-05-30 00:58:19.447751 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-30 00:58:19.447755 | orchestrator | Friday 30 May 2025 00:55:58 +0000 (0:00:00.433) 0:10:32.046 ************ 2025-05-30 00:58:19.447759 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.447762 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.447766 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.447770 | orchestrator | 2025-05-30 00:58:19.447774 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-30 00:58:19.447777 | orchestrator | Friday 30 May 2025 00:55:59 +0000 (0:00:00.412) 0:10:32.459 ************ 2025-05-30 00:58:19.447781 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.447785 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.447789 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.447792 | orchestrator | 2025-05-30 00:58:19.447796 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-30 00:58:19.447800 | orchestrator | Friday 30 May 2025 00:56:00 +0000 (0:00:00.838) 0:10:33.297 ************ 2025-05-30 00:58:19.447804 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.447807 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.447811 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.447815 | orchestrator | 2025-05-30 00:58:19.447821 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-30 
00:58:19.447828 | orchestrator | Friday 30 May 2025 00:56:00 +0000 (0:00:00.462) 0:10:33.759 ************ 2025-05-30 00:58:19.447831 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.447835 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.447842 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.447846 | orchestrator | 2025-05-30 00:58:19.447849 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-30 00:58:19.447853 | orchestrator | Friday 30 May 2025 00:56:01 +0000 (0:00:00.473) 0:10:34.233 ************ 2025-05-30 00:58:19.447857 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.447861 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.447864 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.447868 | orchestrator | 2025-05-30 00:58:19.447872 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-30 00:58:19.447876 | orchestrator | Friday 30 May 2025 00:56:01 +0000 (0:00:00.435) 0:10:34.669 ************ 2025-05-30 00:58:19.447879 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.447883 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.447887 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.447891 | orchestrator | 2025-05-30 00:58:19.447894 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-30 00:58:19.447898 | orchestrator | Friday 30 May 2025 00:56:02 +0000 (0:00:00.800) 0:10:35.470 ************ 2025-05-30 00:58:19.447902 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-30 00:58:19.447906 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-30 00:58:19.447932 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.447937 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-30 00:58:19.447941 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-30 00:58:19.447945 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.447948 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-30 00:58:19.447952 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-30 00:58:19.447956 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.447960 | orchestrator | 2025-05-30 00:58:19.447963 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-30 00:58:19.447967 | orchestrator | Friday 30 May 2025 00:56:02 +0000 (0:00:00.483) 0:10:35.953 ************ 2025-05-30 00:58:19.447971 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-30 00:58:19.447975 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-30 00:58:19.447978 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-30 00:58:19.447982 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-30 00:58:19.447986 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.447989 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.447993 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-30 00:58:19.447997 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-30 00:58:19.448001 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.448005 | orchestrator | 2025-05-30 00:58:19.448008 | orchestrator | TASK [ceph-config : set_fact 
_osd_memory_target] ******************************* 2025-05-30 00:58:19.448012 | orchestrator | Friday 30 May 2025 00:56:03 +0000 (0:00:00.393) 0:10:36.347 ************ 2025-05-30 00:58:19.448016 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.448020 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.448023 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.448027 | orchestrator | 2025-05-30 00:58:19.448031 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-30 00:58:19.448035 | orchestrator | Friday 30 May 2025 00:56:03 +0000 (0:00:00.333) 0:10:36.680 ************ 2025-05-30 00:58:19.448038 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.448042 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.448046 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.448049 | orchestrator | 2025-05-30 00:58:19.448053 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-30 00:58:19.448057 | orchestrator | Friday 30 May 2025 00:56:04 +0000 (0:00:00.560) 0:10:37.241 ************ 2025-05-30 00:58:19.448064 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.448068 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.448071 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.448075 | orchestrator | 2025-05-30 00:58:19.448079 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-30 00:58:19.448083 | orchestrator | Friday 30 May 2025 00:56:04 +0000 (0:00:00.312) 0:10:37.553 ************ 2025-05-30 00:58:19.448086 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.448090 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.448094 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.448098 | orchestrator | 2025-05-30 00:58:19.448101 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-30 00:58:19.448105 | orchestrator | Friday 30 May 2025 00:56:04 +0000 (0:00:00.300) 0:10:37.854 ************ 2025-05-30 00:58:19.448109 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.448113 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.448116 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.448120 | orchestrator | 2025-05-30 00:58:19.448124 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-30 00:58:19.448128 | orchestrator | Friday 30 May 2025 00:56:05 +0000 (0:00:00.408) 0:10:38.262 ************ 2025-05-30 00:58:19.448131 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.448135 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.448139 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.448142 | orchestrator | 2025-05-30 00:58:19.448146 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-30 00:58:19.448150 | orchestrator | Friday 30 May 2025 00:56:05 +0000 (0:00:00.612) 0:10:38.874 ************ 2025-05-30 00:58:19.448154 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-30 00:58:19.448157 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-30 00:58:19.448163 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-30 00:58:19.448167 | orchestrator | 
skipping: [testbed-node-3] 2025-05-30 00:58:19.448171 | orchestrator | 2025-05-30 00:58:19.448178 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-30 00:58:19.448181 | orchestrator | Friday 30 May 2025 00:56:06 +0000 (0:00:00.425) 0:10:39.300 ************ 2025-05-30 00:58:19.448185 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-30 00:58:19.448189 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-30 00:58:19.448193 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-30 00:58:19.448197 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.448200 | orchestrator | 2025-05-30 00:58:19.448204 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-30 00:58:19.448208 | orchestrator | Friday 30 May 2025 00:56:06 +0000 (0:00:00.380) 0:10:39.680 ************ 2025-05-30 00:58:19.448212 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-30 00:58:19.448216 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-30 00:58:19.448219 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-30 00:58:19.448223 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.448227 | orchestrator | 2025-05-30 00:58:19.448231 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-30 00:58:19.448234 | orchestrator | Friday 30 May 2025 00:56:06 +0000 (0:00:00.394) 0:10:40.074 ************ 2025-05-30 00:58:19.448238 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.448242 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.448245 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.448249 | orchestrator | 2025-05-30 00:58:19.448253 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-30 00:58:19.448257 | orchestrator | Friday 30 May 2025 00:56:07 +0000 (0:00:00.278) 0:10:40.353 ************ 2025-05-30 00:58:19.448266 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-30 00:58:19.448270 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.448274 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-30 00:58:19.448277 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.448281 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-30 00:58:19.448285 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.448289 | orchestrator | 2025-05-30 00:58:19.448292 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-30 00:58:19.448296 | orchestrator | Friday 30 May 2025 00:56:07 +0000 (0:00:00.386) 0:10:40.740 ************ 2025-05-30 00:58:19.448300 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.448303 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.448307 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.448311 | orchestrator | 2025-05-30 00:58:19.448315 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-30 00:58:19.448318 | orchestrator | Friday 30 May 2025 00:56:08 +0000 (0:00:00.605) 0:10:41.346 ************ 2025-05-30 00:58:19.448322 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.448326 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.448330 | orchestrator | skipping: 
[testbed-node-5] 2025-05-30 00:58:19.448333 | orchestrator | 2025-05-30 00:58:19.448337 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-30 00:58:19.448341 | orchestrator | Friday 30 May 2025 00:56:08 +0000 (0:00:00.283) 0:10:41.629 ************ 2025-05-30 00:58:19.448345 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-30 00:58:19.448348 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.448352 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-30 00:58:19.448356 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.448359 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-30 00:58:19.448363 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.448367 | orchestrator | 2025-05-30 00:58:19.448371 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-30 00:58:19.448374 | orchestrator | Friday 30 May 2025 00:56:08 +0000 (0:00:00.387) 0:10:42.016 ************ 2025-05-30 00:58:19.448378 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-30 00:58:19.448382 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.448386 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-30 00:58:19.448389 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.448393 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-30 00:58:19.448397 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.448401 | orchestrator | 2025-05-30 00:58:19.448405 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-30 00:58:19.448408 | orchestrator | Friday 30 May 2025 00:56:09 +0000 (0:00:00.318) 0:10:42.335 ************ 2025-05-30 00:58:19.448412 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-30 00:58:19.448416 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-30 00:58:19.448420 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-30 00:58:19.448423 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.448427 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-30 00:58:19.448431 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-30 00:58:19.448434 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-30 00:58:19.448438 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.448442 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-30 00:58:19.448446 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-30 00:58:19.448453 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-30 00:58:19.448459 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.448462 | orchestrator | 2025-05-30 00:58:19.448469 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-30 00:58:19.448473 | orchestrator | Friday 30 May 2025 00:56:09 +0000 (0:00:00.794) 0:10:43.129 ************ 2025-05-30 00:58:19.448477 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.448480 | 
orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.448484 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.448488 | orchestrator | 2025-05-30 00:58:19.448492 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-30 00:58:19.448495 | orchestrator | Friday 30 May 2025 00:56:10 +0000 (0:00:00.574) 0:10:43.703 ************ 2025-05-30 00:58:19.448499 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-30 00:58:19.448503 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.448506 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-30 00:58:19.448510 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.448514 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-30 00:58:19.448518 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.448521 | orchestrator | 2025-05-30 00:58:19.448525 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-30 00:58:19.448529 | orchestrator | Friday 30 May 2025 00:56:11 +0000 (0:00:00.813) 0:10:44.517 ************ 2025-05-30 00:58:19.448533 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.448536 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.448540 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.448544 | orchestrator | 2025-05-30 00:58:19.448548 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-30 00:58:19.448551 | orchestrator | Friday 30 May 2025 00:56:11 +0000 (0:00:00.594) 0:10:45.111 ************ 2025-05-30 00:58:19.448555 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.448559 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.448563 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.448566 | orchestrator | 2025-05-30 00:58:19.448570 | orchestrator | TASK [ceph-mds : include create_mds_filesystems.yml] *************************** 2025-05-30 00:58:19.448574 | orchestrator | Friday 30 May 2025 00:56:13 +0000 (0:00:01.032) 0:10:46.144 ************ 2025-05-30 00:58:19.448577 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.448581 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.448585 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-05-30 00:58:19.448589 | orchestrator | 2025-05-30 00:58:19.448593 | orchestrator | TASK [ceph-facts : get current default crush rule details] ********************* 2025-05-30 00:58:19.448596 | orchestrator | Friday 30 May 2025 00:56:13 +0000 (0:00:00.528) 0:10:46.673 ************ 2025-05-30 00:58:19.448600 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-30 00:58:19.448604 | orchestrator | 2025-05-30 00:58:19.448608 | orchestrator | TASK [ceph-facts : get current default crush rule name] ************************ 2025-05-30 00:58:19.448611 | orchestrator | Friday 30 May 2025 00:56:15 +0000 (0:00:01.744) 0:10:48.417 ************ 2025-05-30 00:58:19.448617 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-05-30 00:58:19.448621 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.448625 | orchestrator | 2025-05-30 00:58:19.448629 | orchestrator | TASK [ceph-mds : 
create filesystem pools] ************************************** 2025-05-30 00:58:19.448633 | orchestrator | Friday 30 May 2025 00:56:15 +0000 (0:00:00.583) 0:10:49.000 ************ 2025-05-30 00:58:19.448637 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-30 00:58:19.448650 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-30 00:58:19.448654 | orchestrator | 2025-05-30 00:58:19.448658 | orchestrator | TASK [ceph-mds : create ceph filesystem] *************************************** 2025-05-30 00:58:19.448661 | orchestrator | Friday 30 May 2025 00:56:22 +0000 (0:00:06.637) 0:10:55.638 ************ 2025-05-30 00:58:19.448665 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-30 00:58:19.448669 | orchestrator | 2025-05-30 00:58:19.448673 | orchestrator | TASK [ceph-mds : include common.yml] ******************************************* 2025-05-30 00:58:19.448676 | orchestrator | Friday 30 May 2025 00:56:25 +0000 (0:00:03.134) 0:10:58.772 ************ 2025-05-30 00:58:19.448680 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:58:19.448684 | orchestrator | 2025-05-30 00:58:19.448687 | orchestrator | TASK [ceph-mds : create bootstrap-mds and mds directories] ********************* 2025-05-30 00:58:19.448691 | orchestrator | Friday 30 May 2025 00:56:26 +0000 (0:00:00.549) 0:10:59.321 ************ 2025-05-30 00:58:19.448695 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-30 00:58:19.448699 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-05-30 00:58:19.448702 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-30 00:58:19.448706 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-30 00:58:19.448710 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-05-30 00:58:19.448716 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-05-30 00:58:19.448720 | orchestrator | 2025-05-30 00:58:19.448726 | orchestrator | TASK [ceph-mds : get keys from monitors] *************************************** 2025-05-30 00:58:19.448730 | orchestrator | Friday 30 May 2025 00:56:27 +0000 (0:00:01.428) 0:11:00.749 ************ 2025-05-30 00:58:19.448733 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-30 00:58:19.448737 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-30 00:58:19.448741 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-30 00:58:19.448745 | orchestrator | 2025-05-30 00:58:19.448748 | orchestrator | TASK [ceph-mds : copy ceph key(s) if needed] *********************************** 2025-05-30 00:58:19.448752 | orchestrator | Friday 30 May 2025 00:56:29 +0000 (0:00:01.738) 0:11:02.488 ************ 2025-05-30 00:58:19.448756 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-30 
00:58:19.448760 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-30 00:58:19.448763 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.448767 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-30 00:58:19.448771 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-30 00:58:19.448775 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.448778 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-30 00:58:19.448782 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-30 00:58:19.448786 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:58:19.448789 | orchestrator | 2025-05-30 00:58:19.448793 | orchestrator | TASK [ceph-mds : non_containerized.yml] **************************************** 2025-05-30 00:58:19.448797 | orchestrator | Friday 30 May 2025 00:56:30 +0000 (0:00:01.212) 0:11:03.701 ************ 2025-05-30 00:58:19.448801 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.448804 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.448808 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.448812 | orchestrator | 2025-05-30 00:58:19.448816 | orchestrator | TASK [ceph-mds : containerized.yml] ******************************************** 2025-05-30 00:58:19.448823 | orchestrator | Friday 30 May 2025 00:56:31 +0000 (0:00:00.535) 0:11:04.236 ************ 2025-05-30 00:58:19.448826 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:58:19.448830 | orchestrator | 2025-05-30 00:58:19.448834 | orchestrator | TASK [ceph-mds : include_tasks systemd.yml] ************************************ 2025-05-30 00:58:19.448838 | orchestrator | Friday 30 May 2025 00:56:31 +0000 (0:00:00.592) 0:11:04.829 ************ 2025-05-30 00:58:19.448841 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:58:19.448845 | orchestrator | 2025-05-30 00:58:19.448849 | orchestrator | TASK [ceph-mds : generate systemd unit file] *********************************** 2025-05-30 00:58:19.448853 | orchestrator | Friday 30 May 2025 00:56:32 +0000 (0:00:00.775) 0:11:05.604 ************ 2025-05-30 00:58:19.448856 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.448860 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.448864 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:58:19.448868 | orchestrator | 2025-05-30 00:58:19.448871 | orchestrator | TASK [ceph-mds : generate systemd ceph-mds target file] ************************ 2025-05-30 00:58:19.448875 | orchestrator | Friday 30 May 2025 00:56:33 +0000 (0:00:01.215) 0:11:06.820 ************ 2025-05-30 00:58:19.448879 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.448883 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.448886 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:58:19.448890 | orchestrator | 2025-05-30 00:58:19.448894 | orchestrator | TASK [ceph-mds : enable ceph-mds.target] *************************************** 2025-05-30 00:58:19.448898 | orchestrator | Friday 30 May 2025 00:56:34 +0000 (0:00:01.195) 0:11:08.016 ************ 2025-05-30 00:58:19.448901 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.448905 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.448909 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:58:19.448922 | orchestrator | 2025-05-30 
00:58:19.448926 | orchestrator | TASK [ceph-mds : systemd start mds container] ********************************** 2025-05-30 00:58:19.448929 | orchestrator | Friday 30 May 2025 00:56:36 +0000 (0:00:02.002) 0:11:10.018 ************ 2025-05-30 00:58:19.448933 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.448937 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.448940 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:58:19.448944 | orchestrator | 2025-05-30 00:58:19.448948 | orchestrator | TASK [ceph-mds : wait for mds socket to exist] ********************************* 2025-05-30 00:58:19.448952 | orchestrator | Friday 30 May 2025 00:56:38 +0000 (0:00:01.901) 0:11:11.920 ************ 2025-05-30 00:58:19.448955 | orchestrator | FAILED - RETRYING: [testbed-node-3]: wait for mds socket to exist (5 retries left). 2025-05-30 00:58:19.448959 | orchestrator | FAILED - RETRYING: [testbed-node-4]: wait for mds socket to exist (5 retries left). 2025-05-30 00:58:19.448963 | orchestrator | FAILED - RETRYING: [testbed-node-5]: wait for mds socket to exist (5 retries left). 2025-05-30 00:58:19.448967 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.448970 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.448974 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.448978 | orchestrator | 2025-05-30 00:58:19.448981 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-30 00:58:19.448985 | orchestrator | Friday 30 May 2025 00:56:55 +0000 (0:00:17.050) 0:11:28.970 ************ 2025-05-30 00:58:19.448989 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.448993 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.448997 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:58:19.449000 | orchestrator | 2025-05-30 00:58:19.449004 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-30 00:58:19.449008 | orchestrator | Friday 30 May 2025 00:56:56 +0000 (0:00:00.669) 0:11:29.639 ************ 2025-05-30 00:58:19.449011 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:58:19.449018 | orchestrator | 2025-05-30 00:58:19.449024 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called before restart] ******** 2025-05-30 00:58:19.449030 | orchestrator | Friday 30 May 2025 00:56:57 +0000 (0:00:00.759) 0:11:30.400 ************ 2025-05-30 00:58:19.449034 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.449038 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.449042 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.449046 | orchestrator | 2025-05-30 00:58:19.449049 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-05-30 00:58:19.449053 | orchestrator | Friday 30 May 2025 00:56:57 +0000 (0:00:00.332) 0:11:30.732 ************ 2025-05-30 00:58:19.449057 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.449061 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.449064 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:58:19.449068 | orchestrator | 2025-05-30 00:58:19.449072 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mds daemon(s)] ******************** 2025-05-30 00:58:19.449076 | orchestrator | Friday 30 May 2025 00:56:58 +0000 (0:00:01.225) 0:11:31.958 ************ 2025-05-30 00:58:19.449079 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-30 00:58:19.449083 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-30 00:58:19.449087 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-30 00:58:19.449091 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.449094 | orchestrator | 2025-05-30 00:58:19.449098 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-05-30 00:58:19.449102 | orchestrator | Friday 30 May 2025 00:56:59 +0000 (0:00:01.104) 0:11:33.063 ************ 2025-05-30 00:58:19.449105 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.449109 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.449113 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.449117 | orchestrator | 2025-05-30 00:58:19.449121 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-30 00:58:19.449124 | orchestrator | Friday 30 May 2025 00:57:00 +0000 (0:00:00.330) 0:11:33.393 ************ 2025-05-30 00:58:19.449128 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.449132 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.449135 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:58:19.449139 | orchestrator | 2025-05-30 00:58:19.449143 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-05-30 00:58:19.449147 | orchestrator | 2025-05-30 00:58:19.449150 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-30 00:58:19.449154 | orchestrator | Friday 30 May 2025 00:57:02 +0000 (0:00:02.010) 0:11:35.404 ************ 2025-05-30 00:58:19.449158 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:58:19.449162 | orchestrator | 2025-05-30 00:58:19.449165 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-30 00:58:19.449169 | orchestrator | Friday 30 May 2025 00:57:02 +0000 (0:00:00.687) 0:11:36.091 ************ 2025-05-30 00:58:19.449173 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.449177 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.449180 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.449184 | orchestrator | 2025-05-30 00:58:19.449188 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-30 00:58:19.449191 | orchestrator | Friday 30 May 2025 00:57:03 +0000 (0:00:00.337) 0:11:36.429 ************ 2025-05-30 00:58:19.449195 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.449199 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.449203 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.449206 | orchestrator | 2025-05-30 00:58:19.449210 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-30 00:58:19.449214 | orchestrator | Friday 30 May 2025 00:57:04 +0000 (0:00:00.765) 0:11:37.195 ************ 2025-05-30 00:58:19.449221 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.449225 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.449228 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.449232 | orchestrator | 2025-05-30 00:58:19.449236 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 
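
For context on the container checks in this play: ceph-handler decides which handler_*_status facts to set later by asking the container runtime whether a daemon container is already running on each node. The tasks themselves are not printed in this log, so the following is only a minimal sketch of that pattern, assuming podman as the container runtime and the conventional ceph-<daemon>-<hostname> container name; it is not the verbatim ceph-ansible task.

    # Sketch: probe for a running rgw container and record the result as a fact
    - name: check for a rgw container
      ansible.builtin.command: "podman ps -q --filter name=ceph-rgw-{{ ansible_facts['hostname'] }}"
      register: rgw_container_check
      changed_when: false
      failed_when: false

    - name: set_fact handler_rgw_status
      ansible.builtin.set_fact:
        handler_rgw_status: "{{ rgw_container_check.stdout_lines | length > 0 }}"
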
2025-05-30 00:58:19.449239 | orchestrator | Friday 30 May 2025 00:57:04 +0000 (0:00:00.843) 0:11:38.038 ************ 2025-05-30 00:58:19.449243 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.449247 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.449251 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.449254 | orchestrator | 2025-05-30 00:58:19.449258 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-30 00:58:19.449262 | orchestrator | Friday 30 May 2025 00:57:05 +0000 (0:00:00.748) 0:11:38.787 ************ 2025-05-30 00:58:19.449266 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.449269 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.449273 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.449277 | orchestrator | 2025-05-30 00:58:19.449281 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-30 00:58:19.449284 | orchestrator | Friday 30 May 2025 00:57:06 +0000 (0:00:00.355) 0:11:39.143 ************ 2025-05-30 00:58:19.449288 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.449292 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.449295 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.449299 | orchestrator | 2025-05-30 00:58:19.449303 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-30 00:58:19.449307 | orchestrator | Friday 30 May 2025 00:57:06 +0000 (0:00:00.321) 0:11:39.465 ************ 2025-05-30 00:58:19.449310 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.449314 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.449318 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.449322 | orchestrator | 2025-05-30 00:58:19.449325 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-30 00:58:19.449329 | orchestrator | Friday 30 May 2025 00:57:06 +0000 (0:00:00.559) 0:11:40.024 ************ 2025-05-30 00:58:19.449333 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.449354 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.449359 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.449363 | orchestrator | 2025-05-30 00:58:19.449379 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-30 00:58:19.449385 | orchestrator | Friday 30 May 2025 00:57:07 +0000 (0:00:00.329) 0:11:40.354 ************ 2025-05-30 00:58:19.449389 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.449397 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.449401 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.449405 | orchestrator | 2025-05-30 00:58:19.449408 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-30 00:58:19.449412 | orchestrator | Friday 30 May 2025 00:57:07 +0000 (0:00:00.325) 0:11:40.680 ************ 2025-05-30 00:58:19.449416 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.449420 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.449424 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.449427 | orchestrator | 2025-05-30 00:58:19.449431 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-30 00:58:19.449435 | orchestrator | Friday 30 May 2025 00:57:07 +0000 
(0:00:00.318) 0:11:40.999 ************ 2025-05-30 00:58:19.449439 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.449443 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.449446 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.449450 | orchestrator | 2025-05-30 00:58:19.449454 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-30 00:58:19.449458 | orchestrator | Friday 30 May 2025 00:57:08 +0000 (0:00:01.024) 0:11:42.023 ************ 2025-05-30 00:58:19.449461 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.449470 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.449474 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.449478 | orchestrator | 2025-05-30 00:58:19.449482 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-30 00:58:19.449486 | orchestrator | Friday 30 May 2025 00:57:09 +0000 (0:00:00.346) 0:11:42.370 ************ 2025-05-30 00:58:19.449489 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.449493 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.449497 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.449501 | orchestrator | 2025-05-30 00:58:19.449504 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-30 00:58:19.449508 | orchestrator | Friday 30 May 2025 00:57:09 +0000 (0:00:00.311) 0:11:42.681 ************ 2025-05-30 00:58:19.449512 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.449516 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.449519 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.449523 | orchestrator | 2025-05-30 00:58:19.449527 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-30 00:58:19.449531 | orchestrator | Friday 30 May 2025 00:57:09 +0000 (0:00:00.340) 0:11:43.021 ************ 2025-05-30 00:58:19.449535 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.449538 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.449542 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.449546 | orchestrator | 2025-05-30 00:58:19.449550 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-30 00:58:19.449553 | orchestrator | Friday 30 May 2025 00:57:10 +0000 (0:00:00.610) 0:11:43.632 ************ 2025-05-30 00:58:19.449557 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.449561 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.449565 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.449568 | orchestrator | 2025-05-30 00:58:19.449572 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-30 00:58:19.449576 | orchestrator | Friday 30 May 2025 00:57:10 +0000 (0:00:00.340) 0:11:43.972 ************ 2025-05-30 00:58:19.449580 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.449584 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.449587 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.449591 | orchestrator | 2025-05-30 00:58:19.449595 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-30 00:58:19.449599 | orchestrator | Friday 30 May 2025 00:57:11 +0000 (0:00:00.311) 0:11:44.284 ************ 2025-05-30 00:58:19.449602 | orchestrator | skipping: [testbed-node-3] 2025-05-30 
00:58:19.449606 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.449610 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.449614 | orchestrator | 2025-05-30 00:58:19.449617 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-30 00:58:19.449621 | orchestrator | Friday 30 May 2025 00:57:11 +0000 (0:00:00.335) 0:11:44.619 ************ 2025-05-30 00:58:19.449625 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.449629 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.449632 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.449636 | orchestrator | 2025-05-30 00:58:19.449640 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-30 00:58:19.449643 | orchestrator | Friday 30 May 2025 00:57:12 +0000 (0:00:00.606) 0:11:45.225 ************ 2025-05-30 00:58:19.449647 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.449651 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.449655 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.449658 | orchestrator | 2025-05-30 00:58:19.449662 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-30 00:58:19.449666 | orchestrator | Friday 30 May 2025 00:57:12 +0000 (0:00:00.339) 0:11:45.565 ************ 2025-05-30 00:58:19.449670 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.449674 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.449677 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.449684 | orchestrator | 2025-05-30 00:58:19.449688 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-30 00:58:19.449692 | orchestrator | Friday 30 May 2025 00:57:12 +0000 (0:00:00.333) 0:11:45.898 ************ 2025-05-30 00:58:19.449696 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.449700 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.449703 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.449707 | orchestrator | 2025-05-30 00:58:19.449711 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-30 00:58:19.449715 | orchestrator | Friday 30 May 2025 00:57:13 +0000 (0:00:00.350) 0:11:46.249 ************ 2025-05-30 00:58:19.449718 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.449722 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.449726 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.449730 | orchestrator | 2025-05-30 00:58:19.449733 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-30 00:58:19.449737 | orchestrator | Friday 30 May 2025 00:57:13 +0000 (0:00:00.634) 0:11:46.883 ************ 2025-05-30 00:58:19.449743 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.449747 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.449753 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.449756 | orchestrator | 2025-05-30 00:58:19.449760 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-30 00:58:19.449764 | orchestrator | Friday 30 May 2025 00:57:14 +0000 (0:00:00.336) 0:11:47.219 ************ 2025-05-30 00:58:19.449768 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.449772 | orchestrator | skipping: [testbed-node-4] 2025-05-30 
00:58:19.449775 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.449779 | orchestrator | 2025-05-30 00:58:19.449783 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-30 00:58:19.449787 | orchestrator | Friday 30 May 2025 00:57:14 +0000 (0:00:00.333) 0:11:47.552 ************ 2025-05-30 00:58:19.449790 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.449794 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.449798 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.449802 | orchestrator | 2025-05-30 00:58:19.449805 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-30 00:58:19.449809 | orchestrator | Friday 30 May 2025 00:57:14 +0000 (0:00:00.356) 0:11:47.909 ************ 2025-05-30 00:58:19.449813 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.449817 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.449820 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.449824 | orchestrator | 2025-05-30 00:58:19.449828 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-30 00:58:19.449832 | orchestrator | Friday 30 May 2025 00:57:15 +0000 (0:00:00.655) 0:11:48.565 ************ 2025-05-30 00:58:19.449835 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.449839 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.449843 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.449847 | orchestrator | 2025-05-30 00:58:19.449850 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-30 00:58:19.449854 | orchestrator | Friday 30 May 2025 00:57:15 +0000 (0:00:00.328) 0:11:48.893 ************ 2025-05-30 00:58:19.449858 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.449862 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.449865 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.449869 | orchestrator | 2025-05-30 00:58:19.449873 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-30 00:58:19.449877 | orchestrator | Friday 30 May 2025 00:57:16 +0000 (0:00:00.343) 0:11:49.237 ************ 2025-05-30 00:58:19.449880 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.449884 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.449892 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.449896 | orchestrator | 2025-05-30 00:58:19.449899 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-30 00:58:19.449903 | orchestrator | Friday 30 May 2025 00:57:16 +0000 (0:00:00.315) 0:11:49.552 ************ 2025-05-30 00:58:19.449907 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.449932 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.449937 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.449941 | orchestrator | 2025-05-30 00:58:19.449944 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-30 00:58:19.449948 | orchestrator | Friday 30 May 2025 00:57:17 +0000 (0:00:00.593) 0:11:50.145 ************ 2025-05-30 00:58:19.449952 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.449956 | orchestrator | 
skipping: [testbed-node-4] 2025-05-30 00:58:19.449959 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.449963 | orchestrator | 2025-05-30 00:58:19.449967 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-30 00:58:19.449971 | orchestrator | Friday 30 May 2025 00:57:17 +0000 (0:00:00.334) 0:11:50.479 ************ 2025-05-30 00:58:19.449975 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-30 00:58:19.449978 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-30 00:58:19.449982 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.449986 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-30 00:58:19.449990 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-30 00:58:19.449993 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.449997 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-30 00:58:19.450001 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-30 00:58:19.450004 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.450008 | orchestrator | 2025-05-30 00:58:19.450028 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-30 00:58:19.450033 | orchestrator | Friday 30 May 2025 00:57:17 +0000 (0:00:00.401) 0:11:50.881 ************ 2025-05-30 00:58:19.450036 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-30 00:58:19.450040 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-30 00:58:19.450044 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.450048 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-30 00:58:19.450051 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-30 00:58:19.450055 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.450059 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-30 00:58:19.450063 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-30 00:58:19.450067 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.450070 | orchestrator | 2025-05-30 00:58:19.450074 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-30 00:58:19.450078 | orchestrator | Friday 30 May 2025 00:57:18 +0000 (0:00:00.394) 0:11:51.276 ************ 2025-05-30 00:58:19.450082 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.450085 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.450089 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.450093 | orchestrator | 2025-05-30 00:58:19.450097 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-30 00:58:19.450103 | orchestrator | Friday 30 May 2025 00:57:18 +0000 (0:00:00.605) 0:11:51.881 ************ 2025-05-30 00:58:19.450107 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.450114 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.450118 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.450122 | orchestrator | 2025-05-30 00:58:19.450126 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-30 00:58:19.450129 | orchestrator | Friday 30 May 2025 00:57:19 +0000 (0:00:00.334) 0:11:52.216 ************ 2025-05-30 
00:58:19.450137 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.450141 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.450145 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.450148 | orchestrator | 2025-05-30 00:58:19.450152 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-30 00:58:19.450156 | orchestrator | Friday 30 May 2025 00:57:19 +0000 (0:00:00.351) 0:11:52.567 ************ 2025-05-30 00:58:19.450160 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.450163 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.450167 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.450171 | orchestrator | 2025-05-30 00:58:19.450175 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-30 00:58:19.450178 | orchestrator | Friday 30 May 2025 00:57:19 +0000 (0:00:00.343) 0:11:52.910 ************ 2025-05-30 00:58:19.450182 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.450186 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.450190 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.450193 | orchestrator | 2025-05-30 00:58:19.450197 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-30 00:58:19.450201 | orchestrator | Friday 30 May 2025 00:57:20 +0000 (0:00:00.632) 0:11:53.543 ************ 2025-05-30 00:58:19.450205 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.450209 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.450212 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.450216 | orchestrator | 2025-05-30 00:58:19.450220 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-30 00:58:19.450224 | orchestrator | Friday 30 May 2025 00:57:20 +0000 (0:00:00.339) 0:11:53.882 ************ 2025-05-30 00:58:19.450228 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-30 00:58:19.450231 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-30 00:58:19.450235 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-30 00:58:19.450239 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.450243 | orchestrator | 2025-05-30 00:58:19.450246 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-30 00:58:19.450250 | orchestrator | Friday 30 May 2025 00:57:21 +0000 (0:00:00.425) 0:11:54.308 ************ 2025-05-30 00:58:19.450254 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-30 00:58:19.450258 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-30 00:58:19.450262 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-30 00:58:19.450265 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.450269 | orchestrator | 2025-05-30 00:58:19.450273 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-30 00:58:19.450277 | orchestrator | Friday 30 May 2025 00:57:21 +0000 (0:00:00.474) 0:11:54.783 ************ 2025-05-30 00:58:19.450280 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-30 00:58:19.450284 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-30 00:58:19.450288 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-5)  2025-05-30 00:58:19.450292 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.450295 | orchestrator | 2025-05-30 00:58:19.450299 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-30 00:58:19.450303 | orchestrator | Friday 30 May 2025 00:57:22 +0000 (0:00:00.433) 0:11:55.216 ************ 2025-05-30 00:58:19.450307 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.450310 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.450314 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.450318 | orchestrator | 2025-05-30 00:58:19.450322 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-30 00:58:19.450325 | orchestrator | Friday 30 May 2025 00:57:22 +0000 (0:00:00.326) 0:11:55.542 ************ 2025-05-30 00:58:19.450333 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-30 00:58:19.450336 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-30 00:58:19.450340 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.450344 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.450348 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-30 00:58:19.450352 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.450355 | orchestrator | 2025-05-30 00:58:19.450359 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-30 00:58:19.450363 | orchestrator | Friday 30 May 2025 00:57:23 +0000 (0:00:00.783) 0:11:56.326 ************ 2025-05-30 00:58:19.450367 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.450370 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.450374 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.450378 | orchestrator | 2025-05-30 00:58:19.450382 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-30 00:58:19.450386 | orchestrator | Friday 30 May 2025 00:57:23 +0000 (0:00:00.382) 0:11:56.709 ************ 2025-05-30 00:58:19.450389 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.450393 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.450397 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.450401 | orchestrator | 2025-05-30 00:58:19.450404 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-30 00:58:19.450408 | orchestrator | Friday 30 May 2025 00:57:23 +0000 (0:00:00.344) 0:11:57.053 ************ 2025-05-30 00:58:19.450412 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-30 00:58:19.450416 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.450419 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-30 00:58:19.450423 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.450429 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-30 00:58:19.450440 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.450450 | orchestrator | 2025-05-30 00:58:19.450456 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-30 00:58:19.450460 | orchestrator | Friday 30 May 2025 00:57:24 +0000 (0:00:00.438) 0:11:57.492 ************ 2025-05-30 00:58:19.450464 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  
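
The skipped loop items above spell out the shape of the per-host rgw_instances data that ceph-facts works with: one entry per RADOS Gateway instance, carrying its bind address and frontend port. Written out as plain data (values taken directly from the items printed in this log), the entry for testbed-node-3 amounts to:

    rgw_instances:
      - instance_name: rgw0
        radosgw_address: 192.168.16.13
        radosgw_frontend_port: 8081
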
2025-05-30 00:58:19.450468 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.450471 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-30 00:58:19.450475 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.450479 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-30 00:58:19.450483 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.450487 | orchestrator | 2025-05-30 00:58:19.450490 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-30 00:58:19.450494 | orchestrator | Friday 30 May 2025 00:57:24 +0000 (0:00:00.621) 0:11:58.114 ************ 2025-05-30 00:58:19.450498 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-30 00:58:19.450501 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-30 00:58:19.450505 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-30 00:58:19.450509 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.450513 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-30 00:58:19.450516 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-30 00:58:19.450520 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-30 00:58:19.450524 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.450528 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-30 00:58:19.450531 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-30 00:58:19.450538 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-30 00:58:19.450542 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.450546 | orchestrator | 2025-05-30 00:58:19.450550 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-30 00:58:19.450553 | orchestrator | Friday 30 May 2025 00:57:25 +0000 (0:00:00.595) 0:11:58.709 ************ 2025-05-30 00:58:19.450557 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.450561 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.450565 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.450568 | orchestrator | 2025-05-30 00:58:19.450572 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-30 00:58:19.450576 | orchestrator | Friday 30 May 2025 00:57:26 +0000 (0:00:00.760) 0:11:59.469 ************ 2025-05-30 00:58:19.450580 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-30 00:58:19.450583 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.450587 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-30 00:58:19.450591 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.450595 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-30 00:58:19.450598 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.450602 | orchestrator | 2025-05-30 00:58:19.450606 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-30 00:58:19.450610 | orchestrator | Friday 30 May 2025 00:57:26 +0000 (0:00:00.580) 0:12:00.050 ************ 2025-05-30 00:58:19.450613 | orchestrator | skipping: [testbed-node-3] 
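
Each such instance ultimately becomes a client.rgw section in ceph.conf. ceph-ansible renders that file from its own template (the "generate ceph.conf configuration file" task above, skipped on this pass), so the following is purely an illustration of the kind of stanza that results, expressed as a hand-written task; the section and option names follow the usual Ceph convention and are not taken from this run.

    # Illustration only: one rgw instance as a ceph.conf stanza
    - name: add rgw frontend stanza to ceph.conf (illustration only)
      community.general.ini_file:
        path: /etc/ceph/ceph.conf
        section: client.rgw.testbed-node-3.rgw0
        option: rgw frontends
        value: beast endpoint=192.168.16.13:8081
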
2025-05-30 00:58:19.450617 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.450621 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.450625 | orchestrator | 2025-05-30 00:58:19.450628 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-30 00:58:19.450632 | orchestrator | Friday 30 May 2025 00:57:27 +0000 (0:00:00.791) 0:12:00.841 ************ 2025-05-30 00:58:19.450636 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.450640 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.450643 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.450647 | orchestrator | 2025-05-30 00:58:19.450651 | orchestrator | TASK [ceph-rgw : include common.yml] ******************************************* 2025-05-30 00:58:19.450657 | orchestrator | Friday 30 May 2025 00:57:28 +0000 (0:00:00.535) 0:12:01.377 ************ 2025-05-30 00:58:19.450663 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:58:19.450669 | orchestrator | 2025-05-30 00:58:19.450675 | orchestrator | TASK [ceph-rgw : create rados gateway directories] ***************************** 2025-05-30 00:58:19.450681 | orchestrator | Friday 30 May 2025 00:57:29 +0000 (0:00:00.796) 0:12:02.173 ************ 2025-05-30 00:58:19.450687 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2025-05-30 00:58:19.450694 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2025-05-30 00:58:19.450698 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2025-05-30 00:58:19.450702 | orchestrator | 2025-05-30 00:58:19.450706 | orchestrator | TASK [ceph-rgw : get keys from monitors] *************************************** 2025-05-30 00:58:19.450709 | orchestrator | Friday 30 May 2025 00:57:29 +0000 (0:00:00.711) 0:12:02.884 ************ 2025-05-30 00:58:19.450713 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-30 00:58:19.450717 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-30 00:58:19.450727 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-30 00:58:19.450731 | orchestrator | 2025-05-30 00:58:19.450735 | orchestrator | TASK [ceph-rgw : copy ceph key(s) if needed] *********************************** 2025-05-30 00:58:19.450739 | orchestrator | Friday 30 May 2025 00:57:31 +0000 (0:00:01.767) 0:12:04.652 ************ 2025-05-30 00:58:19.450742 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-30 00:58:19.450749 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-30 00:58:19.450758 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.450762 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-30 00:58:19.450766 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-30 00:58:19.450770 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.450773 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-30 00:58:19.450796 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-30 00:58:19.450815 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:58:19.450819 | orchestrator | 2025-05-30 00:58:19.450823 | orchestrator | TASK [ceph-rgw : copy SSL certificate & key data to certificate path] ********** 2025-05-30 00:58:19.450827 | orchestrator | Friday 30 May 2025 00:57:32 +0000 (0:00:01.220) 0:12:05.873 ************ 2025-05-30 00:58:19.450831 | 
orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.450834 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.450838 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.450842 | orchestrator | 2025-05-30 00:58:19.450845 | orchestrator | TASK [ceph-rgw : include_tasks pre_requisite.yml] ****************************** 2025-05-30 00:58:19.450849 | orchestrator | Friday 30 May 2025 00:57:33 +0000 (0:00:00.790) 0:12:06.664 ************ 2025-05-30 00:58:19.450853 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.450857 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.450860 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.450864 | orchestrator | 2025-05-30 00:58:19.450868 | orchestrator | TASK [ceph-rgw : rgw pool creation tasks] ************************************** 2025-05-30 00:58:19.450872 | orchestrator | Friday 30 May 2025 00:57:33 +0000 (0:00:00.372) 0:12:07.036 ************ 2025-05-30 00:58:19.450875 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-05-30 00:58:19.450879 | orchestrator | 2025-05-30 00:58:19.450883 | orchestrator | TASK [ceph-rgw : create ec profile] ******************************************** 2025-05-30 00:58:19.450887 | orchestrator | Friday 30 May 2025 00:57:34 +0000 (0:00:00.235) 0:12:07.272 ************ 2025-05-30 00:58:19.450890 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-30 00:58:19.450894 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-30 00:58:19.450898 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-30 00:58:19.450902 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-30 00:58:19.450906 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-30 00:58:19.450921 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.450928 | orchestrator | 2025-05-30 00:58:19.450935 | orchestrator | TASK [ceph-rgw : set crush rule] *********************************************** 2025-05-30 00:58:19.450940 | orchestrator | Friday 30 May 2025 00:57:35 +0000 (0:00:01.091) 0:12:08.363 ************ 2025-05-30 00:58:19.450963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-30 00:58:19.450970 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-30 00:58:19.450997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-30 00:58:19.451003 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-30 00:58:19.451009 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-30 00:58:19.451016 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.451025 | 
orchestrator | 2025-05-30 00:58:19.451029 | orchestrator | TASK [ceph-rgw : create ec pools for rgw] ************************************** 2025-05-30 00:58:19.451032 | orchestrator | Friday 30 May 2025 00:57:36 +0000 (0:00:00.894) 0:12:09.257 ************ 2025-05-30 00:58:19.451036 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-30 00:58:19.451040 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-30 00:58:19.451044 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-30 00:58:19.451047 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-30 00:58:19.451051 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-30 00:58:19.451055 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.451059 | orchestrator | 2025-05-30 00:58:19.451062 | orchestrator | TASK [ceph-rgw : create replicated pools for rgw] ****************************** 2025-05-30 00:58:19.451066 | orchestrator | Friday 30 May 2025 00:57:36 +0000 (0:00:00.677) 0:12:09.935 ************ 2025-05-30 00:58:19.451076 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-30 00:58:19.451081 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-30 00:58:19.451085 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-30 00:58:19.451088 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-30 00:58:19.451092 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-30 00:58:19.451096 | orchestrator | 2025-05-30 00:58:19.451100 | orchestrator | TASK [ceph-rgw : include_tasks openstack-keystone.yml] ************************* 2025-05-30 00:58:19.451103 | orchestrator | Friday 30 May 2025 00:58:01 +0000 (0:00:24.264) 0:12:34.199 ************ 2025-05-30 00:58:19.451107 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.451111 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.451114 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.451118 | orchestrator | 2025-05-30 00:58:19.451122 | orchestrator | TASK [ceph-rgw : include_tasks start_radosgw.yml] ****************************** 2025-05-30 00:58:19.451126 | orchestrator | Friday 30 May 2025 00:58:01 +0000 (0:00:00.495) 0:12:34.695 ************ 2025-05-30 00:58:19.451129 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.451133 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.451137 | orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.451140 | orchestrator | 2025-05-30 00:58:19.451144 | 
orchestrator | TASK [ceph-rgw : include start_docker_rgw.yml] ********************************* 2025-05-30 00:58:19.451148 | orchestrator | Friday 30 May 2025 00:58:01 +0000 (0:00:00.349) 0:12:35.045 ************ 2025-05-30 00:58:19.451152 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:58:19.451155 | orchestrator | 2025-05-30 00:58:19.451159 | orchestrator | TASK [ceph-rgw : include_task systemd.yml] ************************************* 2025-05-30 00:58:19.451163 | orchestrator | Friday 30 May 2025 00:58:02 +0000 (0:00:00.528) 0:12:35.573 ************ 2025-05-30 00:58:19.451166 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:58:19.451173 | orchestrator | 2025-05-30 00:58:19.451177 | orchestrator | TASK [ceph-rgw : generate systemd unit file] *********************************** 2025-05-30 00:58:19.451181 | orchestrator | Friday 30 May 2025 00:58:03 +0000 (0:00:00.824) 0:12:36.398 ************ 2025-05-30 00:58:19.451185 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.451188 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.451192 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:58:19.451196 | orchestrator | 2025-05-30 00:58:19.451199 | orchestrator | TASK [ceph-rgw : generate systemd ceph-radosgw target file] ******************** 2025-05-30 00:58:19.451203 | orchestrator | Friday 30 May 2025 00:58:04 +0000 (0:00:01.196) 0:12:37.594 ************ 2025-05-30 00:58:19.451207 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.451211 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.451214 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:58:19.451218 | orchestrator | 2025-05-30 00:58:19.451222 | orchestrator | TASK [ceph-rgw : enable ceph-radosgw.target] *********************************** 2025-05-30 00:58:19.451225 | orchestrator | Friday 30 May 2025 00:58:05 +0000 (0:00:01.172) 0:12:38.767 ************ 2025-05-30 00:58:19.451229 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.451233 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.451236 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:58:19.451240 | orchestrator | 2025-05-30 00:58:19.451244 | orchestrator | TASK [ceph-rgw : systemd start rgw container] ********************************** 2025-05-30 00:58:19.451247 | orchestrator | Friday 30 May 2025 00:58:07 +0000 (0:00:02.127) 0:12:40.894 ************ 2025-05-30 00:58:19.451251 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-30 00:58:19.451255 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-30 00:58:19.451259 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-30 00:58:19.451262 | orchestrator | 2025-05-30 00:58:19.451266 | orchestrator | TASK [ceph-rgw : include_tasks multisite/main.yml] ***************************** 2025-05-30 00:58:19.451270 | orchestrator | Friday 30 May 2025 00:58:09 +0000 (0:00:01.991) 0:12:42.886 ************ 2025-05-30 00:58:19.451274 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.451277 | orchestrator | skipping: [testbed-node-4] 2025-05-30 00:58:19.451281 | 
orchestrator | skipping: [testbed-node-5] 2025-05-30 00:58:19.451285 | orchestrator | 2025-05-30 00:58:19.451288 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-30 00:58:19.451292 | orchestrator | Friday 30 May 2025 00:58:10 +0000 (0:00:01.216) 0:12:44.102 ************ 2025-05-30 00:58:19.451296 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.451300 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.451303 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:58:19.451307 | orchestrator | 2025-05-30 00:58:19.451311 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-30 00:58:19.451314 | orchestrator | Friday 30 May 2025 00:58:11 +0000 (0:00:00.719) 0:12:44.822 ************ 2025-05-30 00:58:19.451320 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 00:58:19.451324 | orchestrator | 2025-05-30 00:58:19.451332 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-05-30 00:58:19.451336 | orchestrator | Friday 30 May 2025 00:58:12 +0000 (0:00:00.837) 0:12:45.659 ************ 2025-05-30 00:58:19.451340 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.451344 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.451347 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.451351 | orchestrator | 2025-05-30 00:58:19.451355 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-05-30 00:58:19.451360 | orchestrator | Friday 30 May 2025 00:58:12 +0000 (0:00:00.305) 0:12:45.964 ************ 2025-05-30 00:58:19.451370 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.451376 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.451383 | orchestrator | changed: [testbed-node-5] 2025-05-30 00:58:19.451387 | orchestrator | 2025-05-30 00:58:19.451391 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-05-30 00:58:19.451395 | orchestrator | Friday 30 May 2025 00:58:14 +0000 (0:00:01.294) 0:12:47.258 ************ 2025-05-30 00:58:19.451398 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-30 00:58:19.451402 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-30 00:58:19.451406 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-30 00:58:19.451409 | orchestrator | skipping: [testbed-node-3] 2025-05-30 00:58:19.451413 | orchestrator | 2025-05-30 00:58:19.451417 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-05-30 00:58:19.451421 | orchestrator | Friday 30 May 2025 00:58:15 +0000 (0:00:01.018) 0:12:48.277 ************ 2025-05-30 00:58:19.451424 | orchestrator | ok: [testbed-node-3] 2025-05-30 00:58:19.451428 | orchestrator | ok: [testbed-node-4] 2025-05-30 00:58:19.451432 | orchestrator | ok: [testbed-node-5] 2025-05-30 00:58:19.451435 | orchestrator | 2025-05-30 00:58:19.451439 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-30 00:58:19.451443 | orchestrator | Friday 30 May 2025 00:58:15 +0000 (0:00:00.437) 0:12:48.715 ************ 2025-05-30 00:58:19.451447 | orchestrator | changed: [testbed-node-3] 2025-05-30 00:58:19.451450 | orchestrator | changed: [testbed-node-4] 2025-05-30 00:58:19.451454 | orchestrator 
| changed: [testbed-node-5] 2025-05-30 00:58:19.451458 | orchestrator | 2025-05-30 00:58:19.451461 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:58:19.451465 | orchestrator | testbed-node-0 : ok=131  changed=38  unreachable=0 failed=0 skipped=291  rescued=0 ignored=0 2025-05-30 00:58:19.451469 | orchestrator | testbed-node-1 : ok=119  changed=34  unreachable=0 failed=0 skipped=262  rescued=0 ignored=0 2025-05-30 00:58:19.451473 | orchestrator | testbed-node-2 : ok=126  changed=36  unreachable=0 failed=0 skipped=261  rescued=0 ignored=0 2025-05-30 00:58:19.451477 | orchestrator | testbed-node-3 : ok=175  changed=47  unreachable=0 failed=0 skipped=347  rescued=0 ignored=0 2025-05-30 00:58:19.451480 | orchestrator | testbed-node-4 : ok=164  changed=43  unreachable=0 failed=0 skipped=309  rescued=0 ignored=0 2025-05-30 00:58:19.451484 | orchestrator | testbed-node-5 : ok=166  changed=44  unreachable=0 failed=0 skipped=307  rescued=0 ignored=0 2025-05-30 00:58:19.451488 | orchestrator | 2025-05-30 00:58:19.451492 | orchestrator | 2025-05-30 00:58:19.451495 | orchestrator | 2025-05-30 00:58:19.451499 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-30 00:58:19.451503 | orchestrator | Friday 30 May 2025 00:58:16 +0000 (0:00:01.233) 0:12:49.949 ************ 2025-05-30 00:58:19.451507 | orchestrator | =============================================================================== 2025-05-30 00:58:19.451510 | orchestrator | ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image -- 46.33s 2025-05-30 00:58:19.451514 | orchestrator | ceph-osd : use ceph-volume to create bluestore osds -------------------- 38.34s 2025-05-30 00:58:19.451518 | orchestrator | ceph-rgw : create replicated pools for rgw ----------------------------- 24.26s 2025-05-30 00:58:19.451522 | orchestrator | ceph-mon : waiting for the monitor(s) to form the quorum... 
------------ 21.48s 2025-05-30 00:58:19.451525 | orchestrator | ceph-mds : wait for mds socket to exist -------------------------------- 17.05s 2025-05-30 00:58:19.451529 | orchestrator | ceph-mgr : wait for all mgr to be up ----------------------------------- 13.30s 2025-05-30 00:58:19.451533 | orchestrator | ceph-osd : wait for all osd to be up ----------------------------------- 12.54s 2025-05-30 00:58:19.451540 | orchestrator | ceph-mgr : create ceph mgr keyring(s) on a mon node --------------------- 8.25s 2025-05-30 00:58:19.451544 | orchestrator | ceph-mon : fetch ceph initial keys -------------------------------------- 7.36s 2025-05-30 00:58:19.451548 | orchestrator | ceph-mds : create filesystem pools -------------------------------------- 6.64s 2025-05-30 00:58:19.451551 | orchestrator | ceph-mgr : disable ceph mgr enabled modules ----------------------------- 6.49s 2025-05-30 00:58:19.451555 | orchestrator | ceph-config : create ceph initial directories --------------------------- 6.08s 2025-05-30 00:58:19.451559 | orchestrator | ceph-mgr : add modules to ceph-mgr -------------------------------------- 5.73s 2025-05-30 00:58:19.451563 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 5.46s 2025-05-30 00:58:19.451566 | orchestrator | ceph-config : generate ceph.conf configuration file --------------------- 4.33s 2025-05-30 00:58:19.451570 | orchestrator | ceph-crash : start the ceph-crash service ------------------------------- 3.92s 2025-05-30 00:58:19.451577 | orchestrator | ceph-crash : create client.crash keyring -------------------------------- 3.53s 2025-05-30 00:58:19.451583 | orchestrator | ceph-osd : systemd start osd -------------------------------------------- 3.38s 2025-05-30 00:58:19.451587 | orchestrator | ceph-handler : remove tempdir for scripts ------------------------------- 3.35s 2025-05-30 00:58:19.451591 | orchestrator | ceph-handler : remove tempdir for scripts ------------------------------- 3.24s 2025-05-30 00:58:19.451594 | orchestrator | 2025-05-30 00:58:19 | INFO  | Task 1abff583-7731-4378-bb99-d326715f8083 is in state STARTED 2025-05-30 00:58:19.451598 | orchestrator | 2025-05-30 00:58:19 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:58:22.451525 | orchestrator | 2025-05-30 00:58:22 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:58:22.453492 | orchestrator | 2025-05-30 00:58:22 | INFO  | Task 689b1e7a-ebcc-4efa-9c5f-2d9a1b22460c is in state STARTED 2025-05-30 00:58:22.454696 | orchestrator | 2025-05-30 00:58:22 | INFO  | Task 1abff583-7731-4378-bb99-d326715f8083 is in state STARTED 2025-05-30 00:58:22.454769 | orchestrator | 2025-05-30 00:58:22 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:58:25.508424 | orchestrator | 2025-05-30 00:58:25 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:58:25.511065 | orchestrator | 2025-05-30 00:58:25 | INFO  | Task edd098f8-8ea8-43c6-911e-b72a951746d9 is in state STARTED 2025-05-30 00:58:25.514748 | orchestrator | 2025-05-30 00:58:25 | INFO  | Task 889b7f25-c297-4f98-aada-1d60ae3420da is in state STARTED 2025-05-30 00:58:25.520309 | orchestrator | 2025-05-30 00:58:25 | INFO  | Task 689b1e7a-ebcc-4efa-9c5f-2d9a1b22460c is in state SUCCESS 2025-05-30 00:58:25.522437 | orchestrator | 2025-05-30 00:58:25.522471 | orchestrator | 2025-05-30 00:58:25.522483 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-05-30 
00:58:25.522495 | orchestrator | 2025-05-30 00:58:25.522506 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-05-30 00:58:25.522518 | orchestrator | Friday 30 May 2025 00:55:02 +0000 (0:00:00.167) 0:00:00.167 ************ 2025-05-30 00:58:25.522529 | orchestrator | ok: [localhost] => { 2025-05-30 00:58:25.522542 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-05-30 00:58:25.522553 | orchestrator | } 2025-05-30 00:58:25.522565 | orchestrator | 2025-05-30 00:58:25.522576 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-05-30 00:58:25.522587 | orchestrator | Friday 30 May 2025 00:55:02 +0000 (0:00:00.040) 0:00:00.208 ************ 2025-05-30 00:58:25.522598 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-05-30 00:58:25.522610 | orchestrator | ...ignoring 2025-05-30 00:58:25.522622 | orchestrator | 2025-05-30 00:58:25.522657 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-05-30 00:58:25.522668 | orchestrator | Friday 30 May 2025 00:55:04 +0000 (0:00:02.569) 0:00:02.778 ************ 2025-05-30 00:58:25.522751 | orchestrator | skipping: [localhost] 2025-05-30 00:58:25.522780 | orchestrator | 2025-05-30 00:58:25.522792 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-05-30 00:58:25.522803 | orchestrator | Friday 30 May 2025 00:55:04 +0000 (0:00:00.052) 0:00:02.830 ************ 2025-05-30 00:58:25.522814 | orchestrator | ok: [localhost] 2025-05-30 00:58:25.522825 | orchestrator | 2025-05-30 00:58:25.522837 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-30 00:58:25.522847 | orchestrator | 2025-05-30 00:58:25.522858 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-30 00:58:25.522869 | orchestrator | Friday 30 May 2025 00:55:05 +0000 (0:00:00.262) 0:00:03.093 ************ 2025-05-30 00:58:25.522880 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:25.522891 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:25.522902 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:25.522989 | orchestrator | 2025-05-30 00:58:25.523013 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-30 00:58:25.523028 | orchestrator | Friday 30 May 2025 00:55:05 +0000 (0:00:00.514) 0:00:03.607 ************ 2025-05-30 00:58:25.523039 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-05-30 00:58:25.523051 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-05-30 00:58:25.523062 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-05-30 00:58:25.523072 | orchestrator | 2025-05-30 00:58:25.523083 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-05-30 00:58:25.523094 | orchestrator | 2025-05-30 00:58:25.523105 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-05-30 00:58:25.523116 | orchestrator | Friday 30 May 2025 00:55:06 +0000 (0:00:00.470) 0:00:04.078 ************ 2025-05-30 00:58:25.523127 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-30 
00:58:25.523137 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-30 00:58:25.523148 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-30 00:58:25.523159 | orchestrator | 2025-05-30 00:58:25.523170 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-30 00:58:25.523180 | orchestrator | Friday 30 May 2025 00:55:06 +0000 (0:00:00.655) 0:00:04.733 ************ 2025-05-30 00:58:25.523191 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:58:25.523203 | orchestrator | 2025-05-30 00:58:25.523214 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-05-30 00:58:25.523225 | orchestrator | Friday 30 May 2025 00:55:07 +0000 (0:00:00.748) 0:00:05.481 ************ 2025-05-30 00:58:25.523274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-30 00:58:25.523305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-30 00:58:25.523330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-30 00:58:25.523352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-30 00:58:25.523365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-30 00:58:25.523376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-30 00:58:25.523388 | orchestrator | 2025-05-30 00:58:25.523399 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-05-30 00:58:25.523411 | orchestrator | Friday 30 May 2025 00:55:11 +0000 (0:00:03.885) 0:00:09.367 ************ 2025-05-30 00:58:25.523422 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:25.523434 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:25.523445 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:25.523455 | orchestrator | 2025-05-30 00:58:25.523466 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-05-30 00:58:25.523477 | orchestrator | Friday 30 May 2025 00:55:12 +0000 (0:00:00.947) 0:00:10.314 ************ 2025-05-30 00:58:25.523488 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:25.523499 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:25.523509 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:25.523520 | orchestrator | 2025-05-30 00:58:25.523531 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-05-30 00:58:25.523542 | orchestrator | Friday 30 May 2025 00:55:13 +0000 (0:00:01.510) 0:00:11.825 ************ 2025-05-30 00:58:25.523566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': 
'3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-30 00:58:25.523588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-30 00:58:25.523606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-30 00:58:25.523632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-30 00:58:25.523645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-30 00:58:25.523657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-30 00:58:25.523668 | orchestrator | 2025-05-30 00:58:25.523679 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-05-30 00:58:25.523691 | orchestrator | Friday 30 May 2025 00:55:19 +0000 (0:00:05.819) 0:00:17.644 ************ 2025-05-30 00:58:25.523701 | orchestrator | skipping: [testbed-node-1] 
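Note: the service dictionaries dumped in the items above are hard to scan inline. Condensed into YAML (values copied from the logged items; secrets, the timezone/hosts bind mounts, and the disabled mariadb_external_lb stanza omitted), the per-node mariadb definition boils down to:

  mariadb:
    container_name: mariadb
    group: mariadb_shard_0
    image: registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206
    volumes:
      - /etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro
      - mariadb:/var/lib/mysql
      - kolla_logs:/var/log/kolla/
    healthcheck:
      test: ["CMD-SHELL", "/usr/bin/clustercheck"]
      interval: "30"
      retries: "3"
      start_period: "5"
      timeout: "30"
    environment:
      MYSQL_USERNAME: monitor
      MYSQL_HOST: 192.168.16.10          # 192.168.16.11 / .12 on testbed-node-1 / -2
      AVAILABLE_WHEN_DONOR: "1"
    haproxy:
      mariadb:
        enabled: true
        mode: tcp
        port: "3306"
        custom_member_list:
          - " server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5"
          - " server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup"
          - " server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup"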
2025-05-30 00:58:25.523712 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:25.523723 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:25.523734 | orchestrator | 2025-05-30 00:58:25.523744 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-05-30 00:58:25.523755 | orchestrator | Friday 30 May 2025 00:55:20 +0000 (0:00:00.978) 0:00:18.623 ************ 2025-05-30 00:58:25.523766 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:25.523776 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:25.523787 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:25.523798 | orchestrator | 2025-05-30 00:58:25.523808 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-05-30 00:58:25.523819 | orchestrator | Friday 30 May 2025 00:55:27 +0000 (0:00:06.533) 0:00:25.157 ************ 2025-05-30 00:58:25.523842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-30 00:58:25.523862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': 
'192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-30 00:58:25.523880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-30 00:58:25.523905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-30 00:58:25.523938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-30 00:58:25.523950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-30 00:58:25.523961 | orchestrator | 2025-05-30 00:58:25.523972 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-05-30 00:58:25.523983 | orchestrator | Friday 30 May 2025 00:55:30 +0000 (0:00:03.647) 0:00:28.804 ************ 2025-05-30 00:58:25.523994 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:25.524004 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:25.524015 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:25.524025 | orchestrator | 2025-05-30 00:58:25.524036 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-05-30 00:58:25.524047 | orchestrator | Friday 30 May 2025 00:55:31 +0000 (0:00:01.003) 0:00:29.808 ************ 2025-05-30 00:58:25.524064 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:25.524075 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:25.524086 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:25.524096 | orchestrator | 2025-05-30 00:58:25.524107 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-05-30 00:58:25.524118 | orchestrator | Friday 30 May 2025 00:55:32 +0000 (0:00:00.347) 0:00:30.155 ************ 2025-05-30 00:58:25.524129 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:25.524139 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:25.524150 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:25.524161 | orchestrator | 2025-05-30 00:58:25.524171 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-05-30 00:58:25.524182 | orchestrator | Friday 30 May 2025 00:55:32 +0000 (0:00:00.258) 0:00:30.414 ************ 2025-05-30 00:58:25.524205 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-05-30 00:58:25.524217 | orchestrator | ...ignoring 2025-05-30 00:58:25.524228 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-05-30 00:58:25.524239 | orchestrator | ...ignoring 2025-05-30 00:58:25.524250 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-05-30 00:58:25.524260 | orchestrator | ...ignoring 2025-05-30 00:58:25.524271 | orchestrator | 2025-05-30 00:58:25.524282 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-05-30 00:58:25.524293 | orchestrator | Friday 30 May 2025 00:55:43 +0000 (0:00:10.955) 0:00:41.369 ************ 2025-05-30 00:58:25.524303 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:25.524314 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:25.524325 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:25.524335 | orchestrator | 2025-05-30 00:58:25.524346 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-05-30 00:58:25.524357 | orchestrator | Friday 30 May 2025 00:55:44 +0000 (0:00:00.550) 0:00:41.920 ************ 2025-05-30 00:58:25.524368 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:25.524378 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:25.524389 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:25.524400 | orchestrator | 2025-05-30 00:58:25.524411 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-05-30 00:58:25.524421 | orchestrator | Friday 30 May 2025 00:55:44 +0000 (0:00:00.541) 0:00:42.462 ************ 2025-05-30 00:58:25.524432 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:25.524443 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:25.524454 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:25.524464 | orchestrator | 2025-05-30 00:58:25.524481 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-05-30 00:58:25.524492 | orchestrator | Friday 30 May 2025 00:55:45 +0000 (0:00:00.496) 0:00:42.958 ************ 2025-05-30 00:58:25.524503 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:25.524514 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:25.524525 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:25.524536 | orchestrator | 2025-05-30 00:58:25.524546 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-05-30 00:58:25.524557 | orchestrator | Friday 30 May 2025 00:55:45 +0000 (0:00:00.537) 0:00:43.495 ************ 2025-05-30 00:58:25.524568 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:25.524578 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:25.524589 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:25.524600 | orchestrator | 2025-05-30 00:58:25.524610 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-05-30 00:58:25.524621 | orchestrator | Friday 30 May 2025 00:55:46 +0000 (0:00:00.548) 0:00:44.044 ************ 2025-05-30 00:58:25.524639 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:25.524649 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:25.524660 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:25.524671 | orchestrator | 2025-05-30 00:58:25.524682 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 
2025-05-30 00:58:25.524692 | orchestrator | Friday 30 May 2025 00:55:46 +0000 (0:00:00.591) 0:00:44.636 ************ 2025-05-30 00:58:25.524703 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:25.524714 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:25.524725 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-05-30 00:58:25.524736 | orchestrator | 2025-05-30 00:58:25.524746 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-05-30 00:58:25.524757 | orchestrator | Friday 30 May 2025 00:55:47 +0000 (0:00:00.570) 0:00:45.206 ************ 2025-05-30 00:58:25.524768 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:25.524778 | orchestrator | 2025-05-30 00:58:25.524789 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-05-30 00:58:25.524800 | orchestrator | Friday 30 May 2025 00:55:58 +0000 (0:00:10.820) 0:00:56.027 ************ 2025-05-30 00:58:25.524811 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:25.524822 | orchestrator | 2025-05-30 00:58:25.524833 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-30 00:58:25.524844 | orchestrator | Friday 30 May 2025 00:55:58 +0000 (0:00:00.117) 0:00:56.144 ************ 2025-05-30 00:58:25.524854 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:25.524865 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:25.524876 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:25.524887 | orchestrator | 2025-05-30 00:58:25.524898 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-05-30 00:58:25.524909 | orchestrator | Friday 30 May 2025 00:55:59 +0000 (0:00:01.331) 0:00:57.475 ************ 2025-05-30 00:58:25.524937 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:25.524948 | orchestrator | 2025-05-30 00:58:25.524959 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-05-30 00:58:25.524970 | orchestrator | Friday 30 May 2025 00:56:07 +0000 (0:00:07.697) 0:01:05.173 ************ 2025-05-30 00:58:25.524980 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:25.524991 | orchestrator | 2025-05-30 00:58:25.525002 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-05-30 00:58:25.525013 | orchestrator | Friday 30 May 2025 00:56:08 +0000 (0:00:01.529) 0:01:06.702 ************ 2025-05-30 00:58:25.525024 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:25.525035 | orchestrator | 2025-05-30 00:58:25.525046 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-05-30 00:58:25.525056 | orchestrator | Friday 30 May 2025 00:56:11 +0000 (0:00:02.515) 0:01:09.218 ************ 2025-05-30 00:58:25.525067 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:25.525078 | orchestrator | 2025-05-30 00:58:25.525088 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-05-30 00:58:25.525104 | orchestrator | Friday 30 May 2025 00:56:11 +0000 (0:00:00.140) 0:01:09.358 ************ 2025-05-30 00:58:25.525115 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:25.525126 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:25.525136 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:25.525147 | orchestrator | 
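The ignored failures at the top of this excerpt ("Timeout when waiting for search string MariaDB in 192.168.16.1x:3306") match the message format of Ansible's wait_for module: the role probes each node's MariaDB port and looks for the string "MariaDB" in the server greeting to decide whether a cluster already exists, and only then includes bootstrap_cluster.yml on testbed-node-0. Below is a minimal Python sketch of such a probe; the host addresses and the 10-second timeout are taken from the log, while the function name and retry cadence are illustrative rather than the role's actual implementation.

    import socket
    import time

    def mariadb_port_alive(host: str, port: int = 3306, timeout: int = 10) -> bool:
        """Return True if <host>:<port> answers with a greeting containing 'MariaDB'."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                with socket.create_connection((host, port), timeout=2) as sock:
                    sock.settimeout(2)
                    greeting = sock.recv(1024)   # first MySQL/MariaDB handshake packet
                    if b"MariaDB" in greeting:   # same search string as the task above
                        return True
            except OSError:
                pass                             # port closed or no data yet, retry
            time.sleep(1)
        return False                             # corresponds to "Timeout when waiting ..."

    for node in ("192.168.16.11", "192.168.16.12"):
        print(node, mariadb_port_alive(node))

On a fresh deployment these probes time out on every node (the ignored errors above), which is what routes the play into the Galera bootstrap path shown next.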
2025-05-30 00:58:25.525158 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-05-30 00:58:25.525169 | orchestrator | Friday 30 May 2025 00:56:11 +0000 (0:00:00.498) 0:01:09.856 ************ 2025-05-30 00:58:25.525179 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:25.525190 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:25.525201 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:25.525211 | orchestrator | 2025-05-30 00:58:25.525222 | orchestrator | RUNNING HANDLER [mariadb : Restart mariadb-clustercheck container] ************* 2025-05-30 00:58:25.525239 | orchestrator | Friday 30 May 2025 00:56:12 +0000 (0:00:00.467) 0:01:10.323 ************ 2025-05-30 00:58:25.525249 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-05-30 00:58:25.525260 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:25.525271 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:25.525282 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:25.525292 | orchestrator | 2025-05-30 00:58:25.525303 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-05-30 00:58:25.525314 | orchestrator | skipping: no hosts matched 2025-05-30 00:58:25.525324 | orchestrator | 2025-05-30 00:58:25.525335 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-30 00:58:25.525346 | orchestrator | 2025-05-30 00:58:25.525357 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-30 00:58:25.525368 | orchestrator | Friday 30 May 2025 00:56:31 +0000 (0:00:18.832) 0:01:29.156 ************ 2025-05-30 00:58:25.525379 | orchestrator | changed: [testbed-node-1] 2025-05-30 00:58:25.525389 | orchestrator | 2025-05-30 00:58:25.525400 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-30 00:58:25.525411 | orchestrator | Friday 30 May 2025 00:56:52 +0000 (0:00:21.126) 0:01:50.282 ************ 2025-05-30 00:58:25.525427 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:25.525439 | orchestrator | 2025-05-30 00:58:25.525450 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-30 00:58:25.525460 | orchestrator | Friday 30 May 2025 00:57:07 +0000 (0:00:15.537) 0:02:05.820 ************ 2025-05-30 00:58:25.525471 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:25.525482 | orchestrator | 2025-05-30 00:58:25.525492 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-30 00:58:25.525503 | orchestrator | 2025-05-30 00:58:25.525514 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-30 00:58:25.525525 | orchestrator | Friday 30 May 2025 00:57:10 +0000 (0:00:02.537) 0:02:08.358 ************ 2025-05-30 00:58:25.525536 | orchestrator | changed: [testbed-node-2] 2025-05-30 00:58:25.525546 | orchestrator | 2025-05-30 00:58:25.525557 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-30 00:58:25.525568 | orchestrator | Friday 30 May 2025 00:57:26 +0000 (0:00:16.262) 0:02:24.620 ************ 2025-05-30 00:58:25.525579 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:25.525589 | orchestrator | 2025-05-30 00:58:25.525600 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync 
WSREP] ************************ 2025-05-30 00:58:25.525611 | orchestrator | Friday 30 May 2025 00:57:47 +0000 (0:00:20.536) 0:02:45.157 ************ 2025-05-30 00:58:25.525622 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:25.525632 | orchestrator | 2025-05-30 00:58:25.525643 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-05-30 00:58:25.525654 | orchestrator | 2025-05-30 00:58:25.525665 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-30 00:58:25.525676 | orchestrator | Friday 30 May 2025 00:57:49 +0000 (0:00:02.520) 0:02:47.677 ************ 2025-05-30 00:58:25.525686 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:25.525697 | orchestrator | 2025-05-30 00:58:25.525708 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-30 00:58:25.525719 | orchestrator | Friday 30 May 2025 00:58:02 +0000 (0:00:12.759) 0:03:00.436 ************ 2025-05-30 00:58:25.525730 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:25.525741 | orchestrator | 2025-05-30 00:58:25.525751 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-30 00:58:25.525762 | orchestrator | Friday 30 May 2025 00:58:07 +0000 (0:00:04.550) 0:03:04.987 ************ 2025-05-30 00:58:25.525773 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:25.525784 | orchestrator | 2025-05-30 00:58:25.525794 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-05-30 00:58:25.525805 | orchestrator | 2025-05-30 00:58:25.525816 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-05-30 00:58:25.525834 | orchestrator | Friday 30 May 2025 00:58:09 +0000 (0:00:02.497) 0:03:07.485 ************ 2025-05-30 00:58:25.525845 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 00:58:25.525856 | orchestrator | 2025-05-30 00:58:25.525867 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-05-30 00:58:25.525877 | orchestrator | Friday 30 May 2025 00:58:10 +0000 (0:00:00.725) 0:03:08.210 ************ 2025-05-30 00:58:25.525888 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:25.525899 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:25.525910 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:25.525938 | orchestrator | 2025-05-30 00:58:25.525949 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-05-30 00:58:25.525960 | orchestrator | Friday 30 May 2025 00:58:12 +0000 (0:00:02.591) 0:03:10.801 ************ 2025-05-30 00:58:25.525971 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:25.525981 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:25.525992 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:25.526003 | orchestrator | 2025-05-30 00:58:25.526054 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-05-30 00:58:25.526068 | orchestrator | Friday 30 May 2025 00:58:15 +0000 (0:00:02.201) 0:03:13.003 ************ 2025-05-30 00:58:25.526078 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:25.526089 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:25.526100 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:25.526111 | orchestrator | 
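After the bootstrap host is up, the remaining members are restarted one at a time, and each restart is followed by two waits: one for the service port and one for the node to report the Galera WSREP state "Synced". The sketch below illustrates the second wait, assuming a PyMySQL client, a reachable SQL port and placeholder credentials; the kolla-ansible role performs the equivalent status query through its own mechanisms, so this is only an illustration of the check.

    import time
    import pymysql

    def wait_for_wsrep_synced(host: str, user: str, password: str, timeout: int = 300) -> None:
        """Poll wsrep_local_state_comment until the node reports 'Synced'."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                conn = pymysql.connect(host=host, port=3306, user=user,
                                       password=password, connect_timeout=5)
                try:
                    with conn.cursor() as cur:
                        cur.execute("SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'")
                        row = cur.fetchone()
                        if row and row[1] == "Synced":
                            return               # node has rejoined the cluster
                finally:
                    conn.close()
            except pymysql.MySQLError:
                pass                             # server still restarting, keep polling
            time.sleep(5)
        raise TimeoutError(f"{host} did not reach WSREP state 'Synced' within {timeout}s")

    # Example with placeholder credentials:
    # wait_for_wsrep_synced("192.168.16.11", "root", "secret")

Only once a node is back in sync does the play move on, which is why the restarts of testbed-node-1, testbed-node-2 and finally the bootstrap host appear strictly in sequence in the log.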
2025-05-30 00:58:25.526127 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-05-30 00:58:25.526139 | orchestrator | Friday 30 May 2025 00:58:17 +0000 (0:00:02.371) 0:03:15.375 ************ 2025-05-30 00:58:25.526150 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:25.526160 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:25.526171 | orchestrator | changed: [testbed-node-0] 2025-05-30 00:58:25.526182 | orchestrator | 2025-05-30 00:58:25.526193 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-05-30 00:58:25.526203 | orchestrator | Friday 30 May 2025 00:58:19 +0000 (0:00:02.150) 0:03:17.526 ************ 2025-05-30 00:58:25.526214 | orchestrator | ok: [testbed-node-0] 2025-05-30 00:58:25.526225 | orchestrator | ok: [testbed-node-1] 2025-05-30 00:58:25.526236 | orchestrator | ok: [testbed-node-2] 2025-05-30 00:58:25.526247 | orchestrator | 2025-05-30 00:58:25.526257 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-05-30 00:58:25.526268 | orchestrator | Friday 30 May 2025 00:58:23 +0000 (0:00:03.352) 0:03:20.878 ************ 2025-05-30 00:58:25.526279 | orchestrator | skipping: [testbed-node-0] 2025-05-30 00:58:25.526290 | orchestrator | skipping: [testbed-node-1] 2025-05-30 00:58:25.526301 | orchestrator | skipping: [testbed-node-2] 2025-05-30 00:58:25.526311 | orchestrator | 2025-05-30 00:58:25.526322 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 00:58:25.526333 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-05-30 00:58:25.526345 | orchestrator | testbed-node-0 : ok=34  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=1  2025-05-30 00:58:25.526364 | orchestrator | testbed-node-1 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-05-30 00:58:25.526376 | orchestrator | testbed-node-2 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-05-30 00:58:25.526387 | orchestrator | 2025-05-30 00:58:25.526398 | orchestrator | 2025-05-30 00:58:25.526408 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-30 00:58:25.526420 | orchestrator | Friday 30 May 2025 00:58:23 +0000 (0:00:00.383) 0:03:21.262 ************ 2025-05-30 00:58:25.526438 | orchestrator | =============================================================================== 2025-05-30 00:58:25.526449 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 37.39s 2025-05-30 00:58:25.526460 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 36.07s 2025-05-30 00:58:25.526471 | orchestrator | mariadb : Restart mariadb-clustercheck container ----------------------- 18.83s 2025-05-30 00:58:25.526482 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.76s 2025-05-30 00:58:25.526493 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.96s 2025-05-30 00:58:25.526503 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.82s 2025-05-30 00:58:25.526514 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.70s 2025-05-30 00:58:25.526525 | orchestrator | mariadb : Copying over galera.cnf 
--------------------------------------- 6.53s 2025-05-30 00:58:25.526536 | orchestrator | mariadb : Copying over config.json files for services ------------------- 5.82s 2025-05-30 00:58:25.526547 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.06s 2025-05-30 00:58:25.526557 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.55s 2025-05-30 00:58:25.526568 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.89s 2025-05-30 00:58:25.526579 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.65s 2025-05-30 00:58:25.526590 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.35s 2025-05-30 00:58:25.526601 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.59s 2025-05-30 00:58:25.526611 | orchestrator | Check MariaDB service --------------------------------------------------- 2.57s 2025-05-30 00:58:25.526622 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.52s 2025-05-30 00:58:25.526633 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.50s 2025-05-30 00:58:25.526644 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.37s 2025-05-30 00:58:25.526655 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.20s 2025-05-30 00:58:25.526666 | orchestrator | 2025-05-30 00:58:25 | INFO  | Task 1abff583-7731-4378-bb99-d326715f8083 is in state STARTED 2025-05-30 00:58:25.526677 | orchestrator | 2025-05-30 00:58:25 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:58:28.561991 | orchestrator | 2025-05-30 00:58:28 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:58:28.562647 | orchestrator | 2025-05-30 00:58:28 | INFO  | Task edd098f8-8ea8-43c6-911e-b72a951746d9 is in state STARTED 2025-05-30 00:58:28.563318 | orchestrator | 2025-05-30 00:58:28 | INFO  | Task 889b7f25-c297-4f98-aada-1d60ae3420da is in state STARTED 2025-05-30 00:58:28.564205 | orchestrator | 2025-05-30 00:58:28 | INFO  | Task 1abff583-7731-4378-bb99-d326715f8083 is in state STARTED 2025-05-30 00:58:28.564256 | orchestrator | 2025-05-30 00:58:28 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:58:31.609376 | orchestrator | 2025-05-30 00:58:31 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:58:31.610801 | orchestrator | 2025-05-30 00:58:31 | INFO  | Task edd098f8-8ea8-43c6-911e-b72a951746d9 is in state STARTED 2025-05-30 00:58:31.611349 | orchestrator | 2025-05-30 00:58:31 | INFO  | Task 889b7f25-c297-4f98-aada-1d60ae3420da is in state STARTED 2025-05-30 00:58:31.612508 | orchestrator | 2025-05-30 00:58:31 | INFO  | Task 1abff583-7731-4378-bb99-d326715f8083 is in state STARTED 2025-05-30 00:58:31.612951 | orchestrator | 2025-05-30 00:58:31 | INFO  | Wait 1 second(s) until the next check 2025-05-30 00:58:34.648544 | orchestrator | 2025-05-30 00:58:34 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 00:58:34.651625 | orchestrator | 2025-05-30 00:58:34 | INFO  | Task edd098f8-8ea8-43c6-911e-b72a951746d9 is in state STARTED 2025-05-30 00:58:34.654523 | orchestrator | 2025-05-30 00:58:34 | INFO  | Task 889b7f25-c297-4f98-aada-1d60ae3420da is in state STARTED 2025-05-30 00:58:34.657026 | orchestrator | 
2025-05-30 00:58:34 | INFO  | Task 1abff583-7731-4378-bb99-d326715f8083 is in state STARTED
[... repetitive polling output trimmed: tasks fb4c5da4-6736-4528-a700-d20c81fc8612, edd098f8-8ea8-43c6-911e-b72a951746d9, 889b7f25-c297-4f98-aada-1d60ae3420da and 1abff583-7731-4378-bb99-d326715f8083 are re-checked roughly every 3 seconds between 00:58:34 and 00:59:56, all remaining in state STARTED, each round ending with "Wait 1 second(s) until the next check" ...]
2025-05-30 00:59:57.000953 | orchestrator | 2025-05-30 00:59:56 | INFO  | Task 
1abff583-7731-4378-bb99-d326715f8083 is in state STARTED 2025-05-30 00:59:57.000979 | orchestrator | 2025-05-30 00:59:56 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:00:00.053072 | orchestrator | 2025-05-30 01:00:00 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:00:00.053991 | orchestrator | 2025-05-30 01:00:00 | INFO  | Task edd098f8-8ea8-43c6-911e-b72a951746d9 is in state STARTED 2025-05-30 01:00:00.055848 | orchestrator | 2025-05-30 01:00:00 | INFO  | Task 889b7f25-c297-4f98-aada-1d60ae3420da is in state STARTED 2025-05-30 01:00:00.057696 | orchestrator | 2025-05-30 01:00:00 | INFO  | Task 1abff583-7731-4378-bb99-d326715f8083 is in state STARTED 2025-05-30 01:00:00.057724 | orchestrator | 2025-05-30 01:00:00 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:00:03.107567 | orchestrator | 2025-05-30 01:00:03 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:00:03.108848 | orchestrator | 2025-05-30 01:00:03 | INFO  | Task edd098f8-8ea8-43c6-911e-b72a951746d9 is in state STARTED 2025-05-30 01:00:03.111226 | orchestrator | 2025-05-30 01:00:03 | INFO  | Task 889b7f25-c297-4f98-aada-1d60ae3420da is in state STARTED 2025-05-30 01:00:03.114422 | orchestrator | 2025-05-30 01:00:03 | INFO  | Task 1abff583-7731-4378-bb99-d326715f8083 is in state STARTED 2025-05-30 01:00:03.114457 | orchestrator | 2025-05-30 01:00:03 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:00:06.168867 | orchestrator | 2025-05-30 01:00:06 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:00:06.170666 | orchestrator | 2025-05-30 01:00:06 | INFO  | Task edd098f8-8ea8-43c6-911e-b72a951746d9 is in state STARTED 2025-05-30 01:00:06.172911 | orchestrator | 2025-05-30 01:00:06 | INFO  | Task 889b7f25-c297-4f98-aada-1d60ae3420da is in state STARTED 2025-05-30 01:00:06.175227 | orchestrator | 2025-05-30 01:00:06 | INFO  | Task 1abff583-7731-4378-bb99-d326715f8083 is in state STARTED 2025-05-30 01:00:06.175271 | orchestrator | 2025-05-30 01:00:06 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:00:09.236104 | orchestrator | 2025-05-30 01:00:09 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:00:09.237406 | orchestrator | 2025-05-30 01:00:09 | INFO  | Task edd098f8-8ea8-43c6-911e-b72a951746d9 is in state STARTED 2025-05-30 01:00:09.239530 | orchestrator | 2025-05-30 01:00:09 | INFO  | Task 889b7f25-c297-4f98-aada-1d60ae3420da is in state SUCCESS 2025-05-30 01:00:09.241606 | orchestrator | 2025-05-30 01:00:09.241647 | orchestrator | 2025-05-30 01:00:09.241660 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-30 01:00:09.241671 | orchestrator | 2025-05-30 01:00:09.241683 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-30 01:00:09.241694 | orchestrator | Friday 30 May 2025 00:58:26 +0000 (0:00:00.305) 0:00:00.305 ************ 2025-05-30 01:00:09.241705 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:00:09.241718 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:00:09.241729 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:00:09.241740 | orchestrator | 2025-05-30 01:00:09.241751 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-30 01:00:09.241762 | orchestrator | Friday 30 May 2025 00:58:27 +0000 (0:00:00.423) 0:00:00.729 ************ 2025-05-30 01:00:09.241773 | 
orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-05-30 01:00:09.241784 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-05-30 01:00:09.241795 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-05-30 01:00:09.241806 | orchestrator | 2025-05-30 01:00:09.241817 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-05-30 01:00:09.241827 | orchestrator | 2025-05-30 01:00:09.241838 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-30 01:00:09.241849 | orchestrator | Friday 30 May 2025 00:58:27 +0000 (0:00:00.335) 0:00:01.064 ************ 2025-05-30 01:00:09.241860 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 01:00:09.241895 | orchestrator | 2025-05-30 01:00:09.241907 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-05-30 01:00:09.241942 | orchestrator | Friday 30 May 2025 00:58:28 +0000 (0:00:00.706) 0:00:01.771 ************ 2025-05-30 01:00:09.241974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-30 01:00:09.242010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-30 01:00:09.242092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-30 01:00:09.242107 | orchestrator | 2025-05-30 01:00:09.242118 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-05-30 01:00:09.242129 | orchestrator | Friday 30 May 2025 00:58:29 +0000 (0:00:01.675) 0:00:03.447 ************ 2025-05-30 01:00:09.242140 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:00:09.242151 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:00:09.242162 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:00:09.242173 | orchestrator | 2025-05-30 01:00:09.242183 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-30 01:00:09.242194 | orchestrator | Friday 30 May 2025 00:58:30 +0000 (0:00:00.277) 0:00:03.724 ************ 2025-05-30 01:00:09.242213 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-30 01:00:09.242228 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-05-30 01:00:09.242240 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-05-30 01:00:09.242253 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-05-30 01:00:09.242267 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-05-30 01:00:09.242279 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-05-30 01:00:09.242292 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-05-30 01:00:09.242313 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-30 01:00:09.242326 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-05-30 01:00:09.242338 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-05-30 01:00:09.242351 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-05-30 01:00:09.242364 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-05-30 01:00:09.242377 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-05-30 01:00:09.242389 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-05-30 01:00:09.242402 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-30 01:00:09.242415 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-05-30 01:00:09.242427 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-05-30 01:00:09.242441 | orchestrator | skipping: 
[testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-05-30 01:00:09.242459 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-05-30 01:00:09.242473 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-05-30 01:00:09.242486 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-05-30 01:00:09.242501 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-05-30 01:00:09.242516 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-05-30 01:00:09.242528 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-05-30 01:00:09.242542 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-05-30 01:00:09.242555 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'heat', 'enabled': True}) 2025-05-30 01:00:09.242569 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-05-30 01:00:09.242581 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-05-30 01:00:09.242592 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-05-30 01:00:09.242602 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-05-30 01:00:09.242613 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-05-30 01:00:09.242624 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-05-30 01:00:09.242634 | orchestrator | 2025-05-30 01:00:09.242645 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-30 01:00:09.242656 | orchestrator | Friday 30 May 2025 00:58:31 +0000 (0:00:00.981) 0:00:04.706 ************ 2025-05-30 01:00:09.242667 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:00:09.242684 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:00:09.242695 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:00:09.242706 | orchestrator | 2025-05-30 01:00:09.242717 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-30 01:00:09.242728 | orchestrator | Friday 30 May 2025 00:58:31 +0000 (0:00:00.445) 0:00:05.151 ************ 2025-05-30 01:00:09.242738 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:00:09.242750 | orchestrator | 2025-05-30 01:00:09.242766 | orchestrator | TASK [horizon : Update custom policy 
file name] ******************************** 2025-05-30 01:00:09.242777 | orchestrator | Friday 30 May 2025 00:58:31 +0000 (0:00:00.131) 0:00:05.283 ************ 2025-05-30 01:00:09.242788 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:00:09.242798 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:00:09.242809 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:00:09.242820 | orchestrator | 2025-05-30 01:00:09.242830 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-30 01:00:09.242841 | orchestrator | Friday 30 May 2025 00:58:32 +0000 (0:00:00.281) 0:00:05.565 ************ 2025-05-30 01:00:09.242852 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:00:09.242862 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:00:09.242890 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:00:09.242901 | orchestrator | 2025-05-30 01:00:09.242912 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-30 01:00:09.242923 | orchestrator | Friday 30 May 2025 00:58:32 +0000 (0:00:00.474) 0:00:06.039 ************ 2025-05-30 01:00:09.242933 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:00:09.242944 | orchestrator | 2025-05-30 01:00:09.242955 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-30 01:00:09.242965 | orchestrator | Friday 30 May 2025 00:58:32 +0000 (0:00:00.106) 0:00:06.146 ************ 2025-05-30 01:00:09.242976 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:00:09.242987 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:00:09.242998 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:00:09.243008 | orchestrator | 2025-05-30 01:00:09.243019 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-30 01:00:09.243030 | orchestrator | Friday 30 May 2025 00:58:33 +0000 (0:00:00.445) 0:00:06.591 ************ 2025-05-30 01:00:09.243041 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:00:09.243052 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:00:09.243063 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:00:09.243073 | orchestrator | 2025-05-30 01:00:09.243084 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-30 01:00:09.243095 | orchestrator | Friday 30 May 2025 00:58:33 +0000 (0:00:00.435) 0:00:07.026 ************ 2025-05-30 01:00:09.243106 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:00:09.243117 | orchestrator | 2025-05-30 01:00:09.243127 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-30 01:00:09.243139 | orchestrator | Friday 30 May 2025 00:58:33 +0000 (0:00:00.145) 0:00:07.172 ************ 2025-05-30 01:00:09.243149 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:00:09.243161 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:00:09.243171 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:00:09.243182 | orchestrator | 2025-05-30 01:00:09.243193 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-30 01:00:09.243204 | orchestrator | Friday 30 May 2025 00:58:34 +0000 (0:00:00.406) 0:00:07.578 ************ 2025-05-30 01:00:09.243215 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:00:09.243226 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:00:09.243236 | orchestrator | ok: [testbed-node-2] 2025-05-30 
01:00:09.243247 | orchestrator | 2025-05-30 01:00:09.243258 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-30 01:00:09.243269 | orchestrator | Friday 30 May 2025 00:58:34 +0000 (0:00:00.450) 0:00:08.029 ************ 2025-05-30 01:00:09.243279 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:00:09.243290 | orchestrator | 2025-05-30 01:00:09.243301 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-30 01:00:09.243318 | orchestrator | Friday 30 May 2025 00:58:34 +0000 (0:00:00.127) 0:00:08.157 ************ 2025-05-30 01:00:09.243329 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:00:09.243340 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:00:09.243351 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:00:09.243362 | orchestrator | 2025-05-30 01:00:09.243373 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-30 01:00:09.243383 | orchestrator | Friday 30 May 2025 00:58:35 +0000 (0:00:00.480) 0:00:08.637 ************ 2025-05-30 01:00:09.243394 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:00:09.243405 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:00:09.243416 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:00:09.243427 | orchestrator | 2025-05-30 01:00:09.243438 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-30 01:00:09.243448 | orchestrator | Friday 30 May 2025 00:58:35 +0000 (0:00:00.313) 0:00:08.951 ************ 2025-05-30 01:00:09.243459 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:00:09.243470 | orchestrator | 2025-05-30 01:00:09.243481 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-30 01:00:09.243491 | orchestrator | Friday 30 May 2025 00:58:35 +0000 (0:00:00.257) 0:00:09.208 ************ 2025-05-30 01:00:09.243502 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:00:09.243513 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:00:09.243523 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:00:09.243534 | orchestrator | 2025-05-30 01:00:09.243545 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-30 01:00:09.243556 | orchestrator | Friday 30 May 2025 00:58:36 +0000 (0:00:00.278) 0:00:09.486 ************ 2025-05-30 01:00:09.243566 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:00:09.243577 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:00:09.243588 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:00:09.243598 | orchestrator | 2025-05-30 01:00:09.243609 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-30 01:00:09.243620 | orchestrator | Friday 30 May 2025 00:58:36 +0000 (0:00:00.607) 0:00:10.094 ************ 2025-05-30 01:00:09.243631 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:00:09.243641 | orchestrator | 2025-05-30 01:00:09.243652 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-30 01:00:09.243663 | orchestrator | Friday 30 May 2025 00:58:36 +0000 (0:00:00.139) 0:00:10.234 ************ 2025-05-30 01:00:09.243674 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:00:09.243684 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:00:09.243695 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:00:09.243706 | 
orchestrator | 2025-05-30 01:00:09.243716 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-30 01:00:09.243727 | orchestrator | Friday 30 May 2025 00:58:37 +0000 (0:00:00.582) 0:00:10.816 ************ 2025-05-30 01:00:09.243744 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:00:09.243755 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:00:09.243766 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:00:09.243777 | orchestrator | 2025-05-30 01:00:09.243787 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-30 01:00:09.243798 | orchestrator | Friday 30 May 2025 00:58:37 +0000 (0:00:00.471) 0:00:11.287 ************ 2025-05-30 01:00:09.243809 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:00:09.243820 | orchestrator | 2025-05-30 01:00:09.243830 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-30 01:00:09.243841 | orchestrator | Friday 30 May 2025 00:58:37 +0000 (0:00:00.129) 0:00:11.417 ************ 2025-05-30 01:00:09.243852 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:00:09.243863 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:00:09.243924 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:00:09.243936 | orchestrator | 2025-05-30 01:00:09.243947 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-30 01:00:09.243965 | orchestrator | Friday 30 May 2025 00:58:38 +0000 (0:00:00.550) 0:00:11.967 ************ 2025-05-30 01:00:09.243976 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:00:09.243987 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:00:09.243998 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:00:09.244008 | orchestrator | 2025-05-30 01:00:09.244019 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-30 01:00:09.244030 | orchestrator | Friday 30 May 2025 00:58:38 +0000 (0:00:00.440) 0:00:12.407 ************ 2025-05-30 01:00:09.244041 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:00:09.244052 | orchestrator | 2025-05-30 01:00:09.244063 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-30 01:00:09.244107 | orchestrator | Friday 30 May 2025 00:58:39 +0000 (0:00:00.259) 0:00:12.667 ************ 2025-05-30 01:00:09.244119 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:00:09.244130 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:00:09.244141 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:00:09.244151 | orchestrator | 2025-05-30 01:00:09.244161 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-30 01:00:09.244170 | orchestrator | Friday 30 May 2025 00:58:39 +0000 (0:00:00.291) 0:00:12.958 ************ 2025-05-30 01:00:09.244180 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:00:09.244190 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:00:09.244199 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:00:09.244209 | orchestrator | 2025-05-30 01:00:09.244223 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-30 01:00:09.244232 | orchestrator | Friday 30 May 2025 00:58:39 +0000 (0:00:00.441) 0:00:13.400 ************ 2025-05-30 01:00:09.244242 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:00:09.244252 | orchestrator | 2025-05-30 01:00:09.244261 
| orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-30 01:00:09.244271 | orchestrator | Friday 30 May 2025 00:58:40 +0000 (0:00:00.131) 0:00:13.532 ************ 2025-05-30 01:00:09.244280 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:00:09.244290 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:00:09.244299 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:00:09.244309 | orchestrator | 2025-05-30 01:00:09.244318 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-30 01:00:09.244328 | orchestrator | Friday 30 May 2025 00:58:40 +0000 (0:00:00.408) 0:00:13.941 ************ 2025-05-30 01:00:09.244338 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:00:09.244347 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:00:09.244357 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:00:09.244366 | orchestrator | 2025-05-30 01:00:09.244376 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-30 01:00:09.244385 | orchestrator | Friday 30 May 2025 00:58:40 +0000 (0:00:00.422) 0:00:14.363 ************ 2025-05-30 01:00:09.244395 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:00:09.244404 | orchestrator | 2025-05-30 01:00:09.244414 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-30 01:00:09.244423 | orchestrator | Friday 30 May 2025 00:58:41 +0000 (0:00:00.124) 0:00:14.488 ************ 2025-05-30 01:00:09.244433 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:00:09.244442 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:00:09.244452 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:00:09.244461 | orchestrator | 2025-05-30 01:00:09.244471 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-30 01:00:09.244481 | orchestrator | Friday 30 May 2025 00:58:41 +0000 (0:00:00.423) 0:00:14.911 ************ 2025-05-30 01:00:09.244490 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:00:09.244511 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:00:09.244522 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:00:09.244531 | orchestrator | 2025-05-30 01:00:09.244541 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-30 01:00:09.244550 | orchestrator | Friday 30 May 2025 00:58:42 +0000 (0:00:00.629) 0:00:15.540 ************ 2025-05-30 01:00:09.244566 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:00:09.244576 | orchestrator | 2025-05-30 01:00:09.244585 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-30 01:00:09.244595 | orchestrator | Friday 30 May 2025 00:58:42 +0000 (0:00:00.160) 0:00:15.701 ************ 2025-05-30 01:00:09.244605 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:00:09.244614 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:00:09.244624 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:00:09.244633 | orchestrator | 2025-05-30 01:00:09.244643 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-05-30 01:00:09.244652 | orchestrator | Friday 30 May 2025 00:58:42 +0000 (0:00:00.568) 0:00:16.269 ************ 2025-05-30 01:00:09.244662 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:00:09.244671 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:00:09.244681 
| orchestrator | changed: [testbed-node-2] 2025-05-30 01:00:09.244690 | orchestrator | 2025-05-30 01:00:09.244700 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-05-30 01:00:09.244710 | orchestrator | Friday 30 May 2025 00:58:46 +0000 (0:00:03.261) 0:00:19.530 ************ 2025-05-30 01:00:09.244719 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-30 01:00:09.244735 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-30 01:00:09.244745 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-30 01:00:09.244754 | orchestrator | 2025-05-30 01:00:09.244764 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-05-30 01:00:09.244774 | orchestrator | Friday 30 May 2025 00:58:50 +0000 (0:00:04.198) 0:00:23.729 ************ 2025-05-30 01:00:09.244783 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-30 01:00:09.244793 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-30 01:00:09.244803 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-30 01:00:09.244812 | orchestrator | 2025-05-30 01:00:09.244822 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-05-30 01:00:09.244832 | orchestrator | Friday 30 May 2025 00:58:53 +0000 (0:00:03.195) 0:00:26.925 ************ 2025-05-30 01:00:09.244841 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-30 01:00:09.244851 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-30 01:00:09.244860 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-30 01:00:09.244915 | orchestrator | 2025-05-30 01:00:09.244927 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-05-30 01:00:09.244937 | orchestrator | Friday 30 May 2025 00:58:55 +0000 (0:00:01.857) 0:00:28.782 ************ 2025-05-30 01:00:09.244946 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:00:09.244956 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:00:09.244966 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:00:09.244975 | orchestrator | 2025-05-30 01:00:09.244985 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-05-30 01:00:09.244995 | orchestrator | Friday 30 May 2025 00:58:55 +0000 (0:00:00.339) 0:00:29.122 ************ 2025-05-30 01:00:09.245005 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:00:09.245020 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:00:09.245030 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:00:09.245039 | orchestrator | 2025-05-30 01:00:09.245049 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-30 01:00:09.245059 | orchestrator | Friday 30 May 2025 00:58:55 +0000 (0:00:00.293) 0:00:29.415 ************ 2025-05-30 01:00:09.245075 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2025-05-30 01:00:09.245085 | orchestrator | 2025-05-30 01:00:09.245094 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-05-30 01:00:09.245104 | orchestrator | Friday 30 May 2025 00:58:56 +0000 (0:00:00.531) 0:00:29.946 ************ 2025-05-30 01:00:09.245122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-30 01:00:09.245147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-30 01:00:09.245173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-30 01:00:09.245185 | orchestrator | 2025-05-30 01:00:09.245195 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 
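The loop item dumped in full above (and again in the tasks that follow) is the horizon service definition that kolla-ansible hands to these tasks. For readability, its shape condenses to roughly the Python literal below; the values are copied from the log output for testbed-node-0, the environment and haproxy sections are abridged, and the keys not shown are exactly as printed above.

    horizon_service = {
        "container_name": "horizon",
        "group": "horizon",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/horizon:24.0.1.20241206",
        # dashboard feature toggles (full ENABLE_* list as printed above)
        "environment": {"ENABLE_DESIGNATE": "yes", "ENABLE_HEAT": "yes", "ENABLE_MAGNUM": "yes",
                        "ENABLE_MANILA": "yes", "ENABLE_OCTAVIA": "yes", "FORCE_GENERATE": "no"},
        # abridged; empty optional volume slots omitted
        "volumes": ["/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro",
                    "/etc/localtime:/etc/localtime:ro", "kolla_logs:/var/log/kolla/"],
        # each node checks its own internal IP (.10/.11/.12 in the items above)
        "healthcheck": {"interval": "30", "retries": "3", "start_period": "5",
                        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:80"],
                        "timeout": "30"},
        # internal and external HAProxy frontends, HTTP->HTTPS redirects, ACME hook (abridged)
        "haproxy": {"horizon": {"mode": "http", "port": "443", "listen_port": "80", "tls_backend": "no"},
                    "horizon_external": {"external": True, "external_fqdn": "api.testbed.osism.xyz",
                                         "port": "443", "listen_port": "80"},
                    "acme_client": {"enabled": True, "with_frontend": False}},
    }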
2025-05-30 01:00:09.245205 | orchestrator | Friday 30 May 2025 00:58:58 +0000 (0:00:01.569) 0:00:31.516 ************ 2025-05-30 01:00:09.245221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-30 01:00:09.245238 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:00:09.245256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-30 01:00:09.245266 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:00:09.245279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-30 01:00:09.245294 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:00:09.245302 | orchestrator | 2025-05-30 01:00:09.245310 | orchestrator | TASK [service-cert-copy : horizon 
| Copying over backend internal TLS key] ***** 2025-05-30 01:00:09.245318 | orchestrator | Friday 30 May 2025 00:58:58 +0000 (0:00:00.692) 0:00:32.208 ************ 2025-05-30 01:00:09.245332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-30 01:00:09.245347 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:00:09.245359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-30 01:00:09.245369 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:00:09.245389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-30 01:00:09.245403 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:00:09.245412 | orchestrator | 2025-05-30 01:00:09.245420 | orchestrator | TASK 
[horizon : Deploy horizon container] ************************************** 2025-05-30 01:00:09.245428 | orchestrator | Friday 30 May 2025 00:58:59 +0000 (0:00:01.164) 0:00:33.373 ************ 2025-05-30 01:00:09.245441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-30 01:00:09.245455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-30 01:00:09.245475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-30 01:00:09.245485 | orchestrator | 2025-05-30 01:00:09.245493 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-30 01:00:09.245501 | orchestrator | Friday 30 May 2025 00:59:04 +0000 (0:00:05.010) 
0:00:38.384 ************ 2025-05-30 01:00:09.245509 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:00:09.245517 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:00:09.245525 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:00:09.245533 | orchestrator | 2025-05-30 01:00:09.245541 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-30 01:00:09.245554 | orchestrator | Friday 30 May 2025 00:59:05 +0000 (0:00:00.395) 0:00:38.779 ************ 2025-05-30 01:00:09.245562 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 01:00:09.245570 | orchestrator | 2025-05-30 01:00:09.245578 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-05-30 01:00:09.245585 | orchestrator | Friday 30 May 2025 00:59:05 +0000 (0:00:00.496) 0:00:39.276 ************ 2025-05-30 01:00:09.245593 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:00:09.245601 | orchestrator | 2025-05-30 01:00:09.245609 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-05-30 01:00:09.245617 | orchestrator | Friday 30 May 2025 00:59:08 +0000 (0:00:02.534) 0:00:41.811 ************ 2025-05-30 01:00:09.245625 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:00:09.245632 | orchestrator | 2025-05-30 01:00:09.245640 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-05-30 01:00:09.245648 | orchestrator | Friday 30 May 2025 00:59:10 +0000 (0:00:02.303) 0:00:44.114 ************ 2025-05-30 01:00:09.245656 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:00:09.245664 | orchestrator | 2025-05-30 01:00:09.245672 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-30 01:00:09.245683 | orchestrator | Friday 30 May 2025 00:59:24 +0000 (0:00:13.984) 0:00:58.099 ************ 2025-05-30 01:00:09.245691 | orchestrator | 2025-05-30 01:00:09.245699 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-30 01:00:09.245707 | orchestrator | Friday 30 May 2025 00:59:24 +0000 (0:00:00.068) 0:00:58.167 ************ 2025-05-30 01:00:09.245715 | orchestrator | 2025-05-30 01:00:09.245723 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-30 01:00:09.245731 | orchestrator | Friday 30 May 2025 00:59:24 +0000 (0:00:00.193) 0:00:58.361 ************ 2025-05-30 01:00:09.245739 | orchestrator | 2025-05-30 01:00:09.245747 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-05-30 01:00:09.245755 | orchestrator | Friday 30 May 2025 00:59:24 +0000 (0:00:00.058) 0:00:58.419 ************ 2025-05-30 01:00:09.245762 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:00:09.245770 | orchestrator | changed: [testbed-node-2] 2025-05-30 01:00:09.245778 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:00:09.245786 | orchestrator | 2025-05-30 01:00:09.245794 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 01:00:09.245802 | orchestrator | testbed-node-0 : ok=39  changed=11  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-05-30 01:00:09.245810 | orchestrator | testbed-node-1 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-05-30 
01:00:09.245818 | orchestrator | testbed-node-2 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-05-30 01:00:09.245826 | orchestrator | 2025-05-30 01:00:09.245834 | orchestrator | 2025-05-30 01:00:09.245842 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-30 01:00:09.245850 | orchestrator | Friday 30 May 2025 01:00:07 +0000 (0:00:42.388) 0:01:40.808 ************ 2025-05-30 01:00:09.245858 | orchestrator | =============================================================================== 2025-05-30 01:00:09.245865 | orchestrator | horizon : Restart horizon container ------------------------------------ 42.39s 2025-05-30 01:00:09.245890 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 13.98s 2025-05-30 01:00:09.245898 | orchestrator | horizon : Deploy horizon container -------------------------------------- 5.01s 2025-05-30 01:00:09.245906 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 4.20s 2025-05-30 01:00:09.245914 | orchestrator | horizon : Copying over config.json files for services ------------------- 3.26s 2025-05-30 01:00:09.245921 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 3.20s 2025-05-30 01:00:09.245937 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.53s 2025-05-30 01:00:09.245945 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.30s 2025-05-30 01:00:09.245953 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.86s 2025-05-30 01:00:09.245961 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.68s 2025-05-30 01:00:09.245969 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.57s 2025-05-30 01:00:09.245977 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.16s 2025-05-30 01:00:09.245985 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.98s 2025-05-30 01:00:09.245997 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.71s 2025-05-30 01:00:09.246005 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.69s 2025-05-30 01:00:09.246013 | orchestrator | horizon : Update policy file name --------------------------------------- 0.63s 2025-05-30 01:00:09.246061 | orchestrator | horizon : Update policy file name --------------------------------------- 0.61s 2025-05-30 01:00:09.246069 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.58s 2025-05-30 01:00:09.246077 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.57s 2025-05-30 01:00:09.246085 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.55s 2025-05-30 01:00:09.246093 | orchestrator | 2025-05-30 01:00:09 | INFO  | Task 1abff583-7731-4378-bb99-d326715f8083 is in state STARTED 2025-05-30 01:00:09.246101 | orchestrator | 2025-05-30 01:00:09 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:00:12.297657 | orchestrator | 2025-05-30 01:00:12 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:00:12.298254 | orchestrator | 2025-05-30 01:00:12 | INFO  | Task 
edd098f8-8ea8-43c6-911e-b72a951746d9 is in state STARTED 2025-05-30 01:00:12.305905 | orchestrator | 2025-05-30 01:00:12 | INFO  | Task 1abff583-7731-4378-bb99-d326715f8083 is in state STARTED 2025-05-30 01:00:12.305950 | orchestrator | 2025-05-30 01:00:12 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:00:15.362426 | orchestrator | 2025-05-30 01:00:15 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:00:15.363058 | orchestrator | 2025-05-30 01:00:15 | INFO  | Task edd098f8-8ea8-43c6-911e-b72a951746d9 is in state STARTED 2025-05-30 01:00:15.363415 | orchestrator | 2025-05-30 01:00:15 | INFO  | Task 1abff583-7731-4378-bb99-d326715f8083 is in state STARTED 2025-05-30 01:00:15.363470 | orchestrator | 2025-05-30 01:00:15 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:00:18.409908 | orchestrator | 2025-05-30 01:00:18 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:00:18.411244 | orchestrator | 2025-05-30 01:00:18 | INFO  | Task edd098f8-8ea8-43c6-911e-b72a951746d9 is in state STARTED 2025-05-30 01:00:18.412718 | orchestrator | 2025-05-30 01:00:18 | INFO  | Task 1abff583-7731-4378-bb99-d326715f8083 is in state STARTED 2025-05-30 01:00:18.412745 | orchestrator | 2025-05-30 01:00:18 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:00:21.461922 | orchestrator | 2025-05-30 01:00:21 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:00:21.462396 | orchestrator | 2025-05-30 01:00:21 | INFO  | Task edd098f8-8ea8-43c6-911e-b72a951746d9 is in state STARTED 2025-05-30 01:00:21.463898 | orchestrator | 2025-05-30 01:00:21 | INFO  | Task 1abff583-7731-4378-bb99-d326715f8083 is in state STARTED 2025-05-30 01:00:21.464152 | orchestrator | 2025-05-30 01:00:21 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:00:24.517199 | orchestrator | 2025-05-30 01:00:24 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:00:24.519936 | orchestrator | 2025-05-30 01:00:24 | INFO  | Task edd098f8-8ea8-43c6-911e-b72a951746d9 is in state STARTED 2025-05-30 01:00:24.522766 | orchestrator | 2025-05-30 01:00:24 | INFO  | Task 1abff583-7731-4378-bb99-d326715f8083 is in state STARTED 2025-05-30 01:00:24.522963 | orchestrator | 2025-05-30 01:00:24 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:00:27.569625 | orchestrator | 2025-05-30 01:00:27 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:00:27.570992 | orchestrator | 2025-05-30 01:00:27 | INFO  | Task edd098f8-8ea8-43c6-911e-b72a951746d9 is in state STARTED 2025-05-30 01:00:27.572440 | orchestrator | 2025-05-30 01:00:27 | INFO  | Task 1abff583-7731-4378-bb99-d326715f8083 is in state STARTED 2025-05-30 01:00:27.572485 | orchestrator | 2025-05-30 01:00:27 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:00:30.629961 | orchestrator | 2025-05-30 01:00:30 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:00:30.631316 | orchestrator | 2025-05-30 01:00:30 | INFO  | Task edd098f8-8ea8-43c6-911e-b72a951746d9 is in state STARTED 2025-05-30 01:00:30.633272 | orchestrator | 2025-05-30 01:00:30 | INFO  | Task 1abff583-7731-4378-bb99-d326715f8083 is in state SUCCESS 2025-05-30 01:00:30.635001 | orchestrator | 2025-05-30 01:00:30.635169 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-30 01:00:30.635433 | orchestrator | 2025-05-30 01:00:30.635450 | 
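The "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" messages above come from the deployment wrapper polling its task queue until each task reports SUCCESS. The pattern is a plain polling loop; a minimal sketch follows, assuming a hypothetical get_task_state() callable rather than the real client API.

    import time

    def wait_for_task(task_id, get_task_state, interval=1.0):
        # Poll until the task leaves the STARTED state; get_task_state is a stand-in,
        # not the actual client call used by the deployment tooling.
        while True:
            state = get_task_state(task_id)        # e.g. "STARTED", "SUCCESS", "FAILURE"
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                return state
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)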
orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-05-30 01:00:30.635462 | orchestrator | 2025-05-30 01:00:30.635473 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-05-30 01:00:30.635485 | orchestrator | Friday 30 May 2025 00:58:21 +0000 (0:00:01.180) 0:00:01.180 ************ 2025-05-30 01:00:30.635496 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 01:00:30.635525 | orchestrator | 2025-05-30 01:00:30.635537 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-30 01:00:30.635549 | orchestrator | Friday 30 May 2025 00:58:22 +0000 (0:00:00.527) 0:00:01.707 ************ 2025-05-30 01:00:30.635560 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-0) 2025-05-30 01:00:30.635571 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-1) 2025-05-30 01:00:30.635582 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-2) 2025-05-30 01:00:30.635593 | orchestrator | 2025-05-30 01:00:30.635604 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-05-30 01:00:30.635615 | orchestrator | Friday 30 May 2025 00:58:23 +0000 (0:00:00.804) 0:00:02.512 ************ 2025-05-30 01:00:30.635627 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 01:00:30.635638 | orchestrator | 2025-05-30 01:00:30.635648 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-05-30 01:00:30.635659 | orchestrator | Friday 30 May 2025 00:58:23 +0000 (0:00:00.726) 0:00:03.238 ************ 2025-05-30 01:00:30.635670 | orchestrator | ok: [testbed-node-4] 2025-05-30 01:00:30.635682 | orchestrator | ok: [testbed-node-3] 2025-05-30 01:00:30.635692 | orchestrator | ok: [testbed-node-5] 2025-05-30 01:00:30.635703 | orchestrator | 2025-05-30 01:00:30.635714 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-30 01:00:30.635725 | orchestrator | Friday 30 May 2025 00:58:24 +0000 (0:00:00.686) 0:00:03.925 ************ 2025-05-30 01:00:30.635736 | orchestrator | ok: [testbed-node-3] 2025-05-30 01:00:30.635747 | orchestrator | ok: [testbed-node-4] 2025-05-30 01:00:30.635783 | orchestrator | ok: [testbed-node-5] 2025-05-30 01:00:30.635794 | orchestrator | 2025-05-30 01:00:30.635805 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-30 01:00:30.635816 | orchestrator | Friday 30 May 2025 00:58:24 +0000 (0:00:00.278) 0:00:04.204 ************ 2025-05-30 01:00:30.635858 | orchestrator | ok: [testbed-node-3] 2025-05-30 01:00:30.635882 | orchestrator | ok: [testbed-node-4] 2025-05-30 01:00:30.635894 | orchestrator | ok: [testbed-node-5] 2025-05-30 01:00:30.635904 | orchestrator | 2025-05-30 01:00:30.635915 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-30 01:00:30.635926 | orchestrator | Friday 30 May 2025 00:58:25 +0000 (0:00:00.845) 0:00:05.049 ************ 2025-05-30 01:00:30.635937 | orchestrator | ok: [testbed-node-3] 2025-05-30 01:00:30.635948 | orchestrator | ok: [testbed-node-4] 2025-05-30 01:00:30.635959 | orchestrator | ok: [testbed-node-5] 2025-05-30 01:00:30.635969 | orchestrator | 2025-05-30 
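The "check if podman binary is present" / "set_fact container_binary" pair above is ceph-ansible deciding which container runtime to drive on each host; the later "docker ps" calls in this play show docker was selected here. A rough sketch of that decision is below; it is not the role's actual logic, which also weighs the atomic-host fact gathered just before.

    import shutil

    def pick_container_binary(is_atomic=False):
        # Simplified stand-in: prefer podman when installed (or on atomic hosts),
        # otherwise fall back to docker.
        if is_atomic or shutil.which("podman"):
            return "podman"
        return "docker"

    print(pick_container_binary())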
01:00:30.635980 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-30 01:00:30.635991 | orchestrator | Friday 30 May 2025 00:58:26 +0000 (0:00:00.306) 0:00:05.355 ************ 2025-05-30 01:00:30.636002 | orchestrator | ok: [testbed-node-3] 2025-05-30 01:00:30.636012 | orchestrator | ok: [testbed-node-4] 2025-05-30 01:00:30.636023 | orchestrator | ok: [testbed-node-5] 2025-05-30 01:00:30.636034 | orchestrator | 2025-05-30 01:00:30.636045 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-05-30 01:00:30.636057 | orchestrator | Friday 30 May 2025 00:58:26 +0000 (0:00:00.330) 0:00:05.686 ************ 2025-05-30 01:00:30.636071 | orchestrator | ok: [testbed-node-3] 2025-05-30 01:00:30.636084 | orchestrator | ok: [testbed-node-4] 2025-05-30 01:00:30.636096 | orchestrator | ok: [testbed-node-5] 2025-05-30 01:00:30.636109 | orchestrator | 2025-05-30 01:00:30.636121 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-05-30 01:00:30.636132 | orchestrator | Friday 30 May 2025 00:58:26 +0000 (0:00:00.321) 0:00:06.008 ************ 2025-05-30 01:00:30.636143 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.636154 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:00:30.636164 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:00:30.636175 | orchestrator | 2025-05-30 01:00:30.636186 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-05-30 01:00:30.636197 | orchestrator | Friday 30 May 2025 00:58:27 +0000 (0:00:00.515) 0:00:06.524 ************ 2025-05-30 01:00:30.636208 | orchestrator | ok: [testbed-node-3] 2025-05-30 01:00:30.636219 | orchestrator | ok: [testbed-node-4] 2025-05-30 01:00:30.636230 | orchestrator | ok: [testbed-node-5] 2025-05-30 01:00:30.636241 | orchestrator | 2025-05-30 01:00:30.636252 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-30 01:00:30.636263 | orchestrator | Friday 30 May 2025 00:58:27 +0000 (0:00:00.300) 0:00:06.825 ************ 2025-05-30 01:00:30.636274 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-30 01:00:30.636284 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-30 01:00:30.636295 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-30 01:00:30.636306 | orchestrator | 2025-05-30 01:00:30.636317 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-05-30 01:00:30.636328 | orchestrator | Friday 30 May 2025 00:58:28 +0000 (0:00:00.664) 0:00:07.489 ************ 2025-05-30 01:00:30.636339 | orchestrator | ok: [testbed-node-3] 2025-05-30 01:00:30.636350 | orchestrator | ok: [testbed-node-4] 2025-05-30 01:00:30.636361 | orchestrator | ok: [testbed-node-5] 2025-05-30 01:00:30.636371 | orchestrator | 2025-05-30 01:00:30.636382 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-05-30 01:00:30.636393 | orchestrator | Friday 30 May 2025 00:58:28 +0000 (0:00:00.485) 0:00:07.974 ************ 2025-05-30 01:00:30.636415 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-30 01:00:30.636436 | orchestrator | changed: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => 
(item=testbed-node-1) 2025-05-30 01:00:30.636447 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-30 01:00:30.636458 | orchestrator | 2025-05-30 01:00:30.636469 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-30 01:00:30.636479 | orchestrator | Friday 30 May 2025 00:58:30 +0000 (0:00:02.282) 0:00:10.257 ************ 2025-05-30 01:00:30.636490 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-30 01:00:30.636502 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-30 01:00:30.636512 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-30 01:00:30.636523 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.636534 | orchestrator | 2025-05-30 01:00:30.636545 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-30 01:00:30.636556 | orchestrator | Friday 30 May 2025 00:58:31 +0000 (0:00:00.474) 0:00:10.732 ************ 2025-05-30 01:00:30.636568 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-30 01:00:30.636581 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-30 01:00:30.636593 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-30 01:00:30.636604 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.636615 | orchestrator | 2025-05-30 01:00:30.636626 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-30 01:00:30.636636 | orchestrator | Friday 30 May 2025 00:58:32 +0000 (0:00:00.655) 0:00:11.387 ************ 2025-05-30 01:00:30.636654 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-30 01:00:30.636668 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-30 01:00:30.636680 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 
'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-30 01:00:30.636691 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.636702 | orchestrator | 2025-05-30 01:00:30.636713 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-30 01:00:30.636724 | orchestrator | Friday 30 May 2025 00:58:32 +0000 (0:00:00.187) 0:00:11.575 ************ 2025-05-30 01:00:30.636737 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '32cc4543507b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-30 00:58:29.518939', 'end': '2025-05-30 00:58:29.562863', 'delta': '0:00:00.043924', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['32cc4543507b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-05-30 01:00:30.636769 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': 'e82e8bdc94b8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-30 00:58:30.064792', 'end': '2025-05-30 00:58:30.107426', 'delta': '0:00:00.042634', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e82e8bdc94b8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-30 01:00:30.636783 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '4d5667bd6d83', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-30 00:58:30.624334', 'end': '2025-05-30 00:58:30.664353', 'delta': '0:00:00.040019', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4d5667bd6d83'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-05-30 01:00:30.636794 | orchestrator | 2025-05-30 01:00:30.636805 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-30 01:00:30.636817 | orchestrator | Friday 30 May 2025 00:58:32 +0000 (0:00:00.198) 0:00:11.774 ************ 2025-05-30 01:00:30.636871 | orchestrator | ok: [testbed-node-3] 2025-05-30 01:00:30.636883 | orchestrator | ok: [testbed-node-4] 2025-05-30 01:00:30.636894 | orchestrator | ok: [testbed-node-5] 2025-05-30 01:00:30.636905 | orchestrator | 2025-05-30 01:00:30.636916 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-30 01:00:30.636927 | orchestrator | Friday 30 May 2025 00:58:32 +0000 (0:00:00.489) 0:00:12.263 ************ 2025-05-30 01:00:30.636937 | orchestrator | ok: [testbed-node-3 
-> testbed-node-2(192.168.16.12)] 2025-05-30 01:00:30.636948 | orchestrator | 2025-05-30 01:00:30.636964 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-30 01:00:30.636975 | orchestrator | Friday 30 May 2025 00:58:34 +0000 (0:00:01.363) 0:00:13.627 ************ 2025-05-30 01:00:30.636986 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.636997 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:00:30.637008 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:00:30.637019 | orchestrator | 2025-05-30 01:00:30.637030 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-30 01:00:30.637040 | orchestrator | Friday 30 May 2025 00:58:34 +0000 (0:00:00.478) 0:00:14.106 ************ 2025-05-30 01:00:30.637051 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.637062 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:00:30.637073 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:00:30.637084 | orchestrator | 2025-05-30 01:00:30.637094 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-30 01:00:30.637105 | orchestrator | Friday 30 May 2025 00:58:35 +0000 (0:00:00.454) 0:00:14.560 ************ 2025-05-30 01:00:30.637116 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.637127 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:00:30.637144 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:00:30.637155 | orchestrator | 2025-05-30 01:00:30.637166 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-30 01:00:30.637177 | orchestrator | Friday 30 May 2025 00:58:35 +0000 (0:00:00.299) 0:00:14.859 ************ 2025-05-30 01:00:30.637187 | orchestrator | ok: [testbed-node-3] 2025-05-30 01:00:30.637198 | orchestrator | 2025-05-30 01:00:30.637209 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-30 01:00:30.637220 | orchestrator | Friday 30 May 2025 00:58:35 +0000 (0:00:00.129) 0:00:14.989 ************ 2025-05-30 01:00:30.637231 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.637242 | orchestrator | 2025-05-30 01:00:30.637253 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-30 01:00:30.637264 | orchestrator | Friday 30 May 2025 00:58:35 +0000 (0:00:00.228) 0:00:15.217 ************ 2025-05-30 01:00:30.637274 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.637285 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:00:30.637296 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:00:30.637307 | orchestrator | 2025-05-30 01:00:30.637317 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-30 01:00:30.637328 | orchestrator | Friday 30 May 2025 00:58:36 +0000 (0:00:00.492) 0:00:15.710 ************ 2025-05-30 01:00:30.637339 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.637350 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:00:30.637361 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:00:30.637371 | orchestrator | 2025-05-30 01:00:30.637382 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-30 01:00:30.637393 | orchestrator | Friday 30 May 2025 00:58:36 +0000 (0:00:00.343) 0:00:16.054 ************ 2025-05-30 01:00:30.637404 | 
orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.637415 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:00:30.637426 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:00:30.637436 | orchestrator | 2025-05-30 01:00:30.637447 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-30 01:00:30.637458 | orchestrator | Friday 30 May 2025 00:58:37 +0000 (0:00:00.357) 0:00:16.411 ************ 2025-05-30 01:00:30.637469 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.637480 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:00:30.637497 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:00:30.637509 | orchestrator | 2025-05-30 01:00:30.637520 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-30 01:00:30.637531 | orchestrator | Friday 30 May 2025 00:58:37 +0000 (0:00:00.339) 0:00:16.750 ************ 2025-05-30 01:00:30.637542 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.637553 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:00:30.637564 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:00:30.637574 | orchestrator | 2025-05-30 01:00:30.637585 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-30 01:00:30.637596 | orchestrator | Friday 30 May 2025 00:58:37 +0000 (0:00:00.550) 0:00:17.301 ************ 2025-05-30 01:00:30.637607 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.637618 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:00:30.637629 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:00:30.637639 | orchestrator | 2025-05-30 01:00:30.637650 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-30 01:00:30.637661 | orchestrator | Friday 30 May 2025 00:58:38 +0000 (0:00:00.318) 0:00:17.620 ************ 2025-05-30 01:00:30.637672 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.637683 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:00:30.637694 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:00:30.637705 | orchestrator | 2025-05-30 01:00:30.637716 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-30 01:00:30.637727 | orchestrator | Friday 30 May 2025 00:58:38 +0000 (0:00:00.426) 0:00:18.046 ************ 2025-05-30 01:00:30.637745 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6d0cb66e--f8af--5d02--a2d6--05303feeced3-osd--block--6d0cb66e--f8af--5d02--a2d6--05303feeced3', 'dm-uuid-LVM-6gGdc0okQLoucjNi2S2OddqQDlbW0RvHPpk2V3WdjgwQEu9HjhnqN54cgy7JnKBh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-30 01:00:30.637762 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f43ff32d--4fc4--5ece--8353--26072ce1c913-osd--block--f43ff32d--4fc4--5ece--8353--26072ce1c913', 'dm-uuid-LVM-oVcraZljfeh9epu3EEifpwHyixceNwoqa9zJlkL2TFSDWxfMwlRwPHl0tRcLM9oW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-30 01:00:30.637775 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 01:00:30.637786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 01:00:30.637798 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 01:00:30.637810 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 01:00:30.637879 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 01:00:30.637894 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 01:00:30.637905 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 01:00:30.637924 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 01:00:30.637944 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a6319b3-0c44-4d2f-bfc1-43899b1e392d', 'scsi-SQEMU_QEMU_HARDDISK_9a6319b3-0c44-4d2f-bfc1-43899b1e392d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a6319b3-0c44-4d2f-bfc1-43899b1e392d-part1', 'scsi-SQEMU_QEMU_HARDDISK_9a6319b3-0c44-4d2f-bfc1-43899b1e392d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a6319b3-0c44-4d2f-bfc1-43899b1e392d-part14', 'scsi-SQEMU_QEMU_HARDDISK_9a6319b3-0c44-4d2f-bfc1-43899b1e392d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a6319b3-0c44-4d2f-bfc1-43899b1e392d-part15', 'scsi-SQEMU_QEMU_HARDDISK_9a6319b3-0c44-4d2f-bfc1-43899b1e392d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a6319b3-0c44-4d2f-bfc1-43899b1e392d-part16', 'scsi-SQEMU_QEMU_HARDDISK_9a6319b3-0c44-4d2f-bfc1-43899b1e392d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 01:00:30.637966 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6d0cb66e--f8af--5d02--a2d6--05303feeced3-osd--block--6d0cb66e--f8af--5d02--a2d6--05303feeced3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Wwhvba-QkQ4-dO70-O1Zv-8C8U-YLVi-cGVH2f', 'scsi-0QEMU_QEMU_HARDDISK_5232ed07-4d85-4988-9bc7-7d761a8f0a42', 'scsi-SQEMU_QEMU_HARDDISK_5232ed07-4d85-4988-9bc7-7d761a8f0a42'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 01:00:30.637979 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f43ff32d--4fc4--5ece--8353--26072ce1c913-osd--block--f43ff32d--4fc4--5ece--8353--26072ce1c913'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-pxjZBb-BbHY-jLWM-Qc7v-fVZn-mI2Q-zDPIaJ', 'scsi-0QEMU_QEMU_HARDDISK_d57cbd6a-67f1-4040-83cf-671f4c3c6a1f', 'scsi-SQEMU_QEMU_HARDDISK_d57cbd6a-67f1-4040-83cf-671f4c3c6a1f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 01:00:30.637997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76f37bde-13ed-44ba-8084-a2417c9798d9', 'scsi-SQEMU_QEMU_HARDDISK_76f37bde-13ed-44ba-8084-a2417c9798d9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 01:00:30.638014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-30-00-02-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 01:00:30.638077 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--50b3064c--7478--543e--8abf--661fdbdc95ce-osd--block--50b3064c--7478--543e--8abf--661fdbdc95ce', 'dm-uuid-LVM-clYUmWVvX7ZWgFP0x00l3EywtfJzxZ3QH6v7nuu2S4cO7xXwGwzjv0kUJx1PnkCS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-30 01:00:30.638089 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--749c70bc--bf8f--56a3--a425--711d4530659c-osd--block--749c70bc--bf8f--56a3--a425--711d4530659c', 'dm-uuid-LVM-9tcTyJjgk0ux8ZxJM3Z0I5BG0kFkt0svje0LYMMVEH8PRz1Nvle2Fu6f0rm3wc0t'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-30 01:00:30.638100 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-05-30 01:00:30.638118 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 01:00:30.638130 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 01:00:30.638148 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 01:00:30.638159 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.638171 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 01:00:30.638192 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 01:00:30.638204 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 01:00:30.638215 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 01:00:30.638235 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62bf4b98-4a21-4975-9c67-1ea56f697b51', 'scsi-SQEMU_QEMU_HARDDISK_62bf4b98-4a21-4975-9c67-1ea56f697b51'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62bf4b98-4a21-4975-9c67-1ea56f697b51-part1', 'scsi-SQEMU_QEMU_HARDDISK_62bf4b98-4a21-4975-9c67-1ea56f697b51-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62bf4b98-4a21-4975-9c67-1ea56f697b51-part14', 'scsi-SQEMU_QEMU_HARDDISK_62bf4b98-4a21-4975-9c67-1ea56f697b51-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62bf4b98-4a21-4975-9c67-1ea56f697b51-part15', 'scsi-SQEMU_QEMU_HARDDISK_62bf4b98-4a21-4975-9c67-1ea56f697b51-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62bf4b98-4a21-4975-9c67-1ea56f697b51-part16', 'scsi-SQEMU_QEMU_HARDDISK_62bf4b98-4a21-4975-9c67-1ea56f697b51-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 01:00:30.638255 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--50b3064c--7478--543e--8abf--661fdbdc95ce-osd--block--50b3064c--7478--543e--8abf--661fdbdc95ce'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dMDbY3-Hov1-9Nml-qChJ-wrEc-m9dI-tyWzXY', 'scsi-0QEMU_QEMU_HARDDISK_173bbd31-d008-4662-8aea-7cfb1ab21884', 'scsi-SQEMU_QEMU_HARDDISK_173bbd31-d008-4662-8aea-7cfb1ab21884'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 01:00:30.638271 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--749c70bc--bf8f--56a3--a425--711d4530659c-osd--block--749c70bc--bf8f--56a3--a425--711d4530659c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XxD8WH-u833-sOKg-41RQ-ZRE2-o4Sl-UJcJBx', 'scsi-0QEMU_QEMU_HARDDISK_fd28e93c-f7f0-4d71-9af0-3817aadd609f', 'scsi-SQEMU_QEMU_HARDDISK_fd28e93c-f7f0-4d71-9af0-3817aadd609f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 01:00:30.638282 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2ff0e7ee--f669--5460--a216--2d1fc13a4a65-osd--block--2ff0e7ee--f669--5460--a216--2d1fc13a4a65', 'dm-uuid-LVM-IKly217p7QCeAB0hTFdCpSZ2HK08iqU6GSmsMOKjgZrcxn43YbP1UbpiR3ETkvpb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-30 01:00:30.638292 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fcd55a48-2b4a-45aa-bb97-767fc341b1ef', 'scsi-SQEMU_QEMU_HARDDISK_fcd55a48-2b4a-45aa-bb97-767fc341b1ef'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 01:00:30.638308 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dfef1ad9--1307--56b8--9770--fa52c7fc01ce-osd--block--dfef1ad9--1307--56b8--9770--fa52c7fc01ce', 'dm-uuid-LVM-KgH2KkzxMOT7QUU348SQZWeoBKbjLTJfQYHob8FVgbG4NbFW7rda7XOde2NimkI9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-30 01:00:30.638324 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-30-00-02-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 01:00:30.638335 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:00:30.638345 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 01:00:30.638355 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 01:00:30.638369 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 01:00:30.638380 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 01:00:30.638389 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 01:00:30.638400 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 01:00:30.638410 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 01:00:30.638425 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 01:00:30.638446 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c29df819-5e55-4aea-aecd-e9fcfd91068f', 'scsi-SQEMU_QEMU_HARDDISK_c29df819-5e55-4aea-aecd-e9fcfd91068f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c29df819-5e55-4aea-aecd-e9fcfd91068f-part1', 'scsi-SQEMU_QEMU_HARDDISK_c29df819-5e55-4aea-aecd-e9fcfd91068f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c29df819-5e55-4aea-aecd-e9fcfd91068f-part14', 'scsi-SQEMU_QEMU_HARDDISK_c29df819-5e55-4aea-aecd-e9fcfd91068f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c29df819-5e55-4aea-aecd-e9fcfd91068f-part15', 'scsi-SQEMU_QEMU_HARDDISK_c29df819-5e55-4aea-aecd-e9fcfd91068f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c29df819-5e55-4aea-aecd-e9fcfd91068f-part16', 'scsi-SQEMU_QEMU_HARDDISK_c29df819-5e55-4aea-aecd-e9fcfd91068f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 01:00:30.638458 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2ff0e7ee--f669--5460--a216--2d1fc13a4a65-osd--block--2ff0e7ee--f669--5460--a216--2d1fc13a4a65'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4Pxju3-iWEI-HwrN-dRCG-LMHZ-deMX-U8lGuZ', 'scsi-0QEMU_QEMU_HARDDISK_2529d57e-ffb4-494c-a22f-a2bb1703f8b2', 'scsi-SQEMU_QEMU_HARDDISK_2529d57e-ffb4-494c-a22f-a2bb1703f8b2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 01:00:30.638469 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--dfef1ad9--1307--56b8--9770--fa52c7fc01ce-osd--block--dfef1ad9--1307--56b8--9770--fa52c7fc01ce'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RJOnck-dGqo-ezFU-Y50Z-T98W-7F3K-0LWt4p', 'scsi-0QEMU_QEMU_HARDDISK_c7216231-2c47-48eb-b4a1-b98b10008028', 'scsi-SQEMU_QEMU_HARDDISK_c7216231-2c47-48eb-b4a1-b98b10008028'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 01:00:30.638521 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8d1e0c18-9aac-4f03-b30e-87512c271b47', 'scsi-SQEMU_QEMU_HARDDISK_8d1e0c18-9aac-4f03-b30e-87512c271b47'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 01:00:30.638539 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-30-00-02-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 01:00:30.638549 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:00:30.638559 | orchestrator | 2025-05-30 01:00:30.638569 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-30 01:00:30.638579 | orchestrator | Friday 30 May 2025 00:58:39 +0000 (0:00:00.645) 0:00:18.691 ************ 2025-05-30 01:00:30.638589 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-05-30 01:00:30.638599 | orchestrator | 2025-05-30 01:00:30.638608 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-30 01:00:30.638618 | orchestrator | Friday 30 May 2025 00:58:41 +0000 (0:00:02.296) 0:00:20.987 ************ 2025-05-30 01:00:30.638628 | orchestrator | ok: [testbed-node-3] 2025-05-30 01:00:30.638637 | orchestrator | 2025-05-30 01:00:30.638647 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-05-30 01:00:30.638657 | orchestrator | Friday 30 May 2025 00:58:41 +0000 (0:00:00.200) 0:00:21.188 ************ 2025-05-30 01:00:30.638666 | orchestrator | ok: [testbed-node-3] 2025-05-30 01:00:30.638676 | orchestrator | ok: [testbed-node-4] 2025-05-30 01:00:30.638686 | orchestrator | ok: [testbed-node-5] 2025-05-30 01:00:30.638695 | orchestrator | 2025-05-30 01:00:30.638705 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-30 01:00:30.638715 | orchestrator | Friday 30 May 2025 00:58:42 +0000 (0:00:00.450) 0:00:21.638 ************ 2025-05-30 01:00:30.638724 | orchestrator | ok: [testbed-node-3] 2025-05-30 01:00:30.638734 | orchestrator | ok: [testbed-node-4] 2025-05-30 01:00:30.638744 | orchestrator | ok: 
[testbed-node-5] 2025-05-30 01:00:30.638753 | orchestrator | 2025-05-30 01:00:30.638767 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-30 01:00:30.638777 | orchestrator | Friday 30 May 2025 00:58:43 +0000 (0:00:00.736) 0:00:22.375 ************ 2025-05-30 01:00:30.638787 | orchestrator | ok: [testbed-node-3] 2025-05-30 01:00:30.638797 | orchestrator | ok: [testbed-node-4] 2025-05-30 01:00:30.638806 | orchestrator | ok: [testbed-node-5] 2025-05-30 01:00:30.638816 | orchestrator | 2025-05-30 01:00:30.638846 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-30 01:00:30.638863 | orchestrator | Friday 30 May 2025 00:58:43 +0000 (0:00:00.386) 0:00:22.761 ************ 2025-05-30 01:00:30.638879 | orchestrator | ok: [testbed-node-3] 2025-05-30 01:00:30.638889 | orchestrator | ok: [testbed-node-4] 2025-05-30 01:00:30.638898 | orchestrator | ok: [testbed-node-5] 2025-05-30 01:00:30.638908 | orchestrator | 2025-05-30 01:00:30.638918 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-30 01:00:30.638927 | orchestrator | Friday 30 May 2025 00:58:44 +0000 (0:00:01.033) 0:00:23.795 ************ 2025-05-30 01:00:30.638937 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.638946 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:00:30.638956 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:00:30.638975 | orchestrator | 2025-05-30 01:00:30.638985 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-30 01:00:30.638995 | orchestrator | Friday 30 May 2025 00:58:44 +0000 (0:00:00.341) 0:00:24.136 ************ 2025-05-30 01:00:30.639004 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.639014 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:00:30.639023 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:00:30.639033 | orchestrator | 2025-05-30 01:00:30.639043 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-30 01:00:30.639052 | orchestrator | Friday 30 May 2025 00:58:45 +0000 (0:00:00.458) 0:00:24.594 ************ 2025-05-30 01:00:30.639062 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.639071 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:00:30.639081 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:00:30.639090 | orchestrator | 2025-05-30 01:00:30.639100 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-30 01:00:30.639109 | orchestrator | Friday 30 May 2025 00:58:45 +0000 (0:00:00.586) 0:00:25.181 ************ 2025-05-30 01:00:30.639119 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-30 01:00:30.639129 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-30 01:00:30.639138 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-30 01:00:30.639148 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-30 01:00:30.639157 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-30 01:00:30.639167 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-30 01:00:30.639176 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:00:30.639186 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-30 01:00:30.639195 | orchestrator 
| skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-30 01:00:30.639205 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.639214 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-30 01:00:30.639224 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:00:30.639233 | orchestrator | 2025-05-30 01:00:30.639243 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-30 01:00:30.639258 | orchestrator | Friday 30 May 2025 00:58:46 +0000 (0:00:00.952) 0:00:26.134 ************ 2025-05-30 01:00:30.639268 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-30 01:00:30.639278 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-30 01:00:30.639287 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-30 01:00:30.639297 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-30 01:00:30.639306 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-30 01:00:30.639316 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-30 01:00:30.639325 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.639335 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-30 01:00:30.639345 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-30 01:00:30.639354 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:00:30.639364 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-30 01:00:30.639374 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:00:30.639383 | orchestrator | 2025-05-30 01:00:30.639393 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-05-30 01:00:30.639402 | orchestrator | Friday 30 May 2025 00:58:47 +0000 (0:00:01.037) 0:00:27.172 ************ 2025-05-30 01:00:30.639412 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-05-30 01:00:30.639422 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-05-30 01:00:30.639432 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-05-30 01:00:30.639441 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-05-30 01:00:30.639457 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-05-30 01:00:30.639466 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-05-30 01:00:30.639476 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-05-30 01:00:30.639486 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-05-30 01:00:30.639496 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-05-30 01:00:30.639505 | orchestrator | 2025-05-30 01:00:30.639515 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-05-30 01:00:30.639524 | orchestrator | Friday 30 May 2025 00:58:49 +0000 (0:00:01.824) 0:00:28.996 ************ 2025-05-30 01:00:30.639534 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-30 01:00:30.639543 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-30 01:00:30.639553 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-30 01:00:30.639563 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.639577 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-30 01:00:30.639587 | orchestrator | 
skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-30 01:00:30.639597 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-30 01:00:30.639606 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:00:30.639616 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-30 01:00:30.639625 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-30 01:00:30.639634 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-30 01:00:30.639644 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:00:30.639653 | orchestrator | 2025-05-30 01:00:30.639663 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-05-30 01:00:30.639673 | orchestrator | Friday 30 May 2025 00:58:50 +0000 (0:00:00.584) 0:00:29.581 ************ 2025-05-30 01:00:30.639682 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-30 01:00:30.639692 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-30 01:00:30.639701 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-30 01:00:30.639711 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-30 01:00:30.639720 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.639730 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-30 01:00:30.639740 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-30 01:00:30.639749 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:00:30.639759 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-30 01:00:30.639768 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-30 01:00:30.639778 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-30 01:00:30.639787 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:00:30.639797 | orchestrator | 2025-05-30 01:00:30.639806 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-30 01:00:30.639816 | orchestrator | Friday 30 May 2025 00:58:50 +0000 (0:00:00.463) 0:00:30.044 ************ 2025-05-30 01:00:30.639871 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-30 01:00:30.639882 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-30 01:00:30.639892 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-30 01:00:30.639902 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-30 01:00:30.639912 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-30 01:00:30.639922 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-30 01:00:30.639931 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.639947 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:00:30.639957 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-30 01:00:30.639973 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-30 01:00:30.639983 | orchestrator | skipping: [testbed-node-5] => (item={'name': 
'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-30 01:00:30.639992 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:00:30.640002 | orchestrator | 2025-05-30 01:00:30.640012 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-30 01:00:30.640021 | orchestrator | Friday 30 May 2025 00:58:51 +0000 (0:00:00.424) 0:00:30.468 ************ 2025-05-30 01:00:30.640031 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 01:00:30.640041 | orchestrator | 2025-05-30 01:00:30.640050 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-30 01:00:30.640060 | orchestrator | Friday 30 May 2025 00:58:51 +0000 (0:00:00.790) 0:00:31.259 ************ 2025-05-30 01:00:30.640070 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.640080 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:00:30.640089 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:00:30.640098 | orchestrator | 2025-05-30 01:00:30.640108 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-30 01:00:30.640118 | orchestrator | Friday 30 May 2025 00:58:52 +0000 (0:00:00.399) 0:00:31.658 ************ 2025-05-30 01:00:30.640127 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.640137 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:00:30.640146 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:00:30.640156 | orchestrator | 2025-05-30 01:00:30.640164 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-30 01:00:30.640172 | orchestrator | Friday 30 May 2025 00:58:52 +0000 (0:00:00.312) 0:00:31.970 ************ 2025-05-30 01:00:30.640179 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.640187 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:00:30.640195 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:00:30.640203 | orchestrator | 2025-05-30 01:00:30.640211 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-30 01:00:30.640219 | orchestrator | Friday 30 May 2025 00:58:52 +0000 (0:00:00.322) 0:00:32.292 ************ 2025-05-30 01:00:30.640227 | orchestrator | ok: [testbed-node-3] 2025-05-30 01:00:30.640234 | orchestrator | ok: [testbed-node-4] 2025-05-30 01:00:30.640242 | orchestrator | ok: [testbed-node-5] 2025-05-30 01:00:30.640250 | orchestrator | 2025-05-30 01:00:30.640258 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-30 01:00:30.640266 | orchestrator | Friday 30 May 2025 00:58:53 +0000 (0:00:00.591) 0:00:32.884 ************ 2025-05-30 01:00:30.640277 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-30 01:00:30.640285 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-30 01:00:30.640293 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-30 01:00:30.640301 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.640309 | orchestrator | 2025-05-30 01:00:30.640317 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-30 01:00:30.640325 | orchestrator | Friday 30 May 2025 00:58:53 +0000 (0:00:00.337) 0:00:33.221 ************ 2025-05-30 
01:00:30.640333 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-30 01:00:30.640340 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-30 01:00:30.640348 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-30 01:00:30.640356 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.640364 | orchestrator | 2025-05-30 01:00:30.640371 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-30 01:00:30.640384 | orchestrator | Friday 30 May 2025 00:58:54 +0000 (0:00:00.326) 0:00:33.548 ************ 2025-05-30 01:00:30.640392 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-30 01:00:30.640400 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-30 01:00:30.640408 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-30 01:00:30.640416 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.640424 | orchestrator | 2025-05-30 01:00:30.640431 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-30 01:00:30.640439 | orchestrator | Friday 30 May 2025 00:58:54 +0000 (0:00:00.323) 0:00:33.871 ************ 2025-05-30 01:00:30.640447 | orchestrator | ok: [testbed-node-3] 2025-05-30 01:00:30.640455 | orchestrator | ok: [testbed-node-4] 2025-05-30 01:00:30.640463 | orchestrator | ok: [testbed-node-5] 2025-05-30 01:00:30.640470 | orchestrator | 2025-05-30 01:00:30.640478 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-30 01:00:30.640486 | orchestrator | Friday 30 May 2025 00:58:54 +0000 (0:00:00.307) 0:00:34.179 ************ 2025-05-30 01:00:30.640494 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-30 01:00:30.640502 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-30 01:00:30.640510 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-30 01:00:30.640517 | orchestrator | 2025-05-30 01:00:30.640525 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-30 01:00:30.640533 | orchestrator | Friday 30 May 2025 00:58:55 +0000 (0:00:00.776) 0:00:34.955 ************ 2025-05-30 01:00:30.640541 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.640549 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:00:30.640557 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:00:30.640564 | orchestrator | 2025-05-30 01:00:30.640572 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-30 01:00:30.640580 | orchestrator | Friday 30 May 2025 00:58:55 +0000 (0:00:00.263) 0:00:35.218 ************ 2025-05-30 01:00:30.640588 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.640596 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:00:30.640603 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:00:30.640611 | orchestrator | 2025-05-30 01:00:30.640619 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-30 01:00:30.640631 | orchestrator | Friday 30 May 2025 00:58:56 +0000 (0:00:00.287) 0:00:35.506 ************ 2025-05-30 01:00:30.640639 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-30 01:00:30.640647 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.640655 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-30 01:00:30.640663 | 
orchestrator | skipping: [testbed-node-4] 2025-05-30 01:00:30.640671 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-30 01:00:30.640678 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:00:30.640686 | orchestrator | 2025-05-30 01:00:30.640694 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-30 01:00:30.640702 | orchestrator | Friday 30 May 2025 00:58:56 +0000 (0:00:00.372) 0:00:35.879 ************ 2025-05-30 01:00:30.640710 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-30 01:00:30.640718 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.640726 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-30 01:00:30.640734 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:00:30.640742 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-30 01:00:30.640750 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:00:30.640758 | orchestrator | 2025-05-30 01:00:30.640765 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-30 01:00:30.640773 | orchestrator | Friday 30 May 2025 00:58:56 +0000 (0:00:00.408) 0:00:36.287 ************ 2025-05-30 01:00:30.640789 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-30 01:00:30.640797 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-30 01:00:30.640805 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-30 01:00:30.640812 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-30 01:00:30.640834 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-30 01:00:30.640843 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-30 01:00:30.640851 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.640858 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-30 01:00:30.640866 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:00:30.640874 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-30 01:00:30.640881 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-30 01:00:30.640893 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:00:30.640901 | orchestrator | 2025-05-30 01:00:30.640909 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-05-30 01:00:30.640917 | orchestrator | Friday 30 May 2025 00:58:57 +0000 (0:00:00.612) 0:00:36.899 ************ 2025-05-30 01:00:30.640925 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.640933 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:00:30.640941 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:00:30.640949 | orchestrator | 2025-05-30 01:00:30.640956 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-05-30 01:00:30.640964 | orchestrator | Friday 30 May 2025 00:58:57 +0000 (0:00:00.264) 0:00:37.164 ************ 2025-05-30 01:00:30.640972 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-30 01:00:30.640980 | orchestrator | 
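For reference, the rgw_instances items shown above resolve to a single rgw0 instance per node (testbed-node-3/4/5), bound to that node's radosgw_address (192.168.16.13-15) on port 8081; the *_address_block and *_interface variants are skipped in this run and _radosgw_address is taken straight from radosgw_address. Once the gateways are running, those endpoints can be probed directly; a minimal sketch, assuming the addresses and port from the log and that the gateways answer anonymous HTTP on /:

    for addr in 192.168.16.13 192.168.16.14 192.168.16.15; do
        # an anonymous GET / returns a small ListAllMyBuckets XML document when the gateway is up
        curl -sf "http://${addr}:8081/" >/dev/null && echo "rgw0 on ${addr}:8081 is reachable"
    done
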
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-30 01:00:30.640988 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-30 01:00:30.640996 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-30 01:00:30.641004 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-30 01:00:30.641012 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-30 01:00:30.641019 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-30 01:00:30.641027 | orchestrator | 2025-05-30 01:00:30.641035 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-05-30 01:00:30.641043 | orchestrator | Friday 30 May 2025 00:58:58 +0000 (0:00:00.873) 0:00:38.037 ************ 2025-05-30 01:00:30.641051 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-30 01:00:30.641059 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-30 01:00:30.641067 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-30 01:00:30.641074 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-30 01:00:30.641082 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-30 01:00:30.641090 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-30 01:00:30.641098 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-30 01:00:30.641106 | orchestrator | 2025-05-30 01:00:30.641114 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-05-30 01:00:30.641122 | orchestrator | Friday 30 May 2025 00:59:00 +0000 (0:00:01.864) 0:00:39.902 ************ 2025-05-30 01:00:30.641129 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:00:30.641137 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:00:30.641145 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-05-30 01:00:30.641158 | orchestrator | 2025-05-30 01:00:30.641166 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-05-30 01:00:30.641178 | orchestrator | Friday 30 May 2025 00:59:01 +0000 (0:00:00.576) 0:00:40.478 ************ 2025-05-30 01:00:30.641187 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-30 01:00:30.641197 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-30 01:00:30.641205 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 
'replicated_rule', 'size': 3, 'type': 1}) 2025-05-30 01:00:30.641213 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-30 01:00:30.641222 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-30 01:00:30.641229 | orchestrator | 2025-05-30 01:00:30.641237 | orchestrator | TASK [generate keys] *********************************************************** 2025-05-30 01:00:30.641245 | orchestrator | Friday 30 May 2025 00:59:41 +0000 (0:00:40.469) 0:01:20.948 ************ 2025-05-30 01:00:30.641253 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-30 01:00:30.641265 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-30 01:00:30.641273 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-30 01:00:30.641281 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-30 01:00:30.641288 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-30 01:00:30.641296 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-30 01:00:30.641304 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-05-30 01:00:30.641312 | orchestrator | 2025-05-30 01:00:30.641319 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-05-30 01:00:30.641327 | orchestrator | Friday 30 May 2025 01:00:01 +0000 (0:00:20.081) 0:01:41.030 ************ 2025-05-30 01:00:30.641335 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-30 01:00:30.641343 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-30 01:00:30.641351 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-30 01:00:30.641358 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-30 01:00:30.641366 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-30 01:00:30.641374 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-30 01:00:30.641382 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-30 01:00:30.641390 | orchestrator | 2025-05-30 01:00:30.641397 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-05-30 01:00:30.641410 | orchestrator | Friday 30 May 2025 01:00:11 +0000 (0:00:09.952) 0:01:50.982 ************ 2025-05-30 01:00:30.641418 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-30 01:00:30.641426 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-30 01:00:30.641434 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-30 01:00:30.641442 | orchestrator | 
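The "create openstack pool(s)" task above takes about 40 seconds and creates the five RBD pools (backups, volumes, images, metrics, vms) with the parameters carried in each item: pg_num/pgp_num 32, size 3, the replicated_rule CRUSH rule, autoscaling off and the rbd application tag. Expressed as plain ceph CLI it amounts to roughly the following; this is an illustrative equivalent run against the first monitor, not the exact commands ceph-ansible issues:

    for pool in backups volumes images metrics vms; do
        ceph osd pool create "${pool}" 32 32 replicated replicated_rule  # pg_num / pgp_num / rule from the item
        ceph osd pool set "${pool}" size 3                               # 'size': 3
        ceph osd pool set "${pool}" pg_autoscale_mode off                # 'pg_autoscale_mode': False
        ceph osd pool application enable "${pool}" rbd                   # 'application': 'rbd'
    done
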
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-30 01:00:30.641449 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-30 01:00:30.641457 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-30 01:00:30.641465 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-30 01:00:30.641473 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-30 01:00:30.641481 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-30 01:00:30.641488 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-30 01:00:30.641496 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-30 01:00:30.641508 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-30 01:00:30.641516 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-30 01:00:30.641524 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-30 01:00:30.641532 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-30 01:00:30.641540 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-30 01:00:30.641547 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-30 01:00:30.641555 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-30 01:00:30.641563 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-05-30 01:00:30.641571 | orchestrator | 2025-05-30 01:00:30.641579 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 01:00:30.641587 | orchestrator | testbed-node-3 : ok=30  changed=2  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-05-30 01:00:30.641595 | orchestrator | testbed-node-4 : ok=20  changed=0 unreachable=0 failed=0 skipped=30  rescued=0 ignored=0 2025-05-30 01:00:30.641603 | orchestrator | testbed-node-5 : ok=25  changed=3  unreachable=0 failed=0 skipped=29  rescued=0 ignored=0 2025-05-30 01:00:30.641611 | orchestrator | 2025-05-30 01:00:30.641619 | orchestrator | 2025-05-30 01:00:30.641627 | orchestrator | 2025-05-30 01:00:30.641635 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-30 01:00:30.641642 | orchestrator | Friday 30 May 2025 01:00:29 +0000 (0:00:17.984) 0:02:08.966 ************ 2025-05-30 01:00:30.641650 | orchestrator | =============================================================================== 2025-05-30 01:00:30.641658 | orchestrator | create openstack pool(s) ----------------------------------------------- 40.47s 2025-05-30 01:00:30.641666 | orchestrator | generate keys ---------------------------------------------------------- 20.08s 2025-05-30 01:00:30.641674 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.98s 2025-05-30 01:00:30.641681 | orchestrator | get keys from monitors -------------------------------------------------- 9.95s 2025-05-30 01:00:30.641689 | orchestrator | ceph-facts : get ceph current status ------------------------------------ 2.30s 2025-05-30 01:00:30.641700 | orchestrator | ceph-facts 
: find a running mon container ------------------------------- 2.28s 2025-05-30 01:00:30.641708 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.86s 2025-05-30 01:00:30.641721 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.82s 2025-05-30 01:00:30.641729 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.36s 2025-05-30 01:00:30.641737 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 1.04s 2025-05-30 01:00:30.641745 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 1.03s 2025-05-30 01:00:30.641752 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.95s 2025-05-30 01:00:30.641760 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 0.87s 2025-05-30 01:00:30.641768 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.85s 2025-05-30 01:00:30.641776 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.80s 2025-05-30 01:00:30.641784 | orchestrator | ceph-facts : import_tasks set_radosgw_address.yml ----------------------- 0.79s 2025-05-30 01:00:30.641791 | orchestrator | ceph-facts : set_fact rgw_instances without rgw multisite --------------- 0.78s 2025-05-30 01:00:30.641799 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.74s 2025-05-30 01:00:30.641807 | orchestrator | ceph-facts : include facts.yml ------------------------------------------ 0.73s 2025-05-30 01:00:30.641815 | orchestrator | ceph-facts : check if it is atomic host --------------------------------- 0.69s 2025-05-30 01:00:30.641840 | orchestrator | 2025-05-30 01:00:30 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:00:33.696652 | orchestrator | 2025-05-30 01:00:33 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:00:33.698224 | orchestrator | 2025-05-30 01:00:33 | INFO  | Task edd098f8-8ea8-43c6-911e-b72a951746d9 is in state STARTED 2025-05-30 01:00:33.699982 | orchestrator | 2025-05-30 01:00:33 | INFO  | Task 1f2f793e-c31b-46b3-94ed-d62a950ff442 is in state STARTED 2025-05-30 01:00:33.700677 | orchestrator | 2025-05-30 01:00:33 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:00:36.749525 | orchestrator | 2025-05-30 01:00:36 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:00:36.750687 | orchestrator | 2025-05-30 01:00:36 | INFO  | Task edd098f8-8ea8-43c6-911e-b72a951746d9 is in state STARTED 2025-05-30 01:00:36.751769 | orchestrator | 2025-05-30 01:00:36 | INFO  | Task 1f2f793e-c31b-46b3-94ed-d62a950ff442 is in state STARTED 2025-05-30 01:00:36.751791 | orchestrator | 2025-05-30 01:00:36 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:00:39.808223 | orchestrator | 2025-05-30 01:00:39 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:00:39.809996 | orchestrator | 2025-05-30 01:00:39 | INFO  | Task edd098f8-8ea8-43c6-911e-b72a951746d9 is in state STARTED 2025-05-30 01:00:39.812088 | orchestrator | 2025-05-30 01:00:39 | INFO  | Task 9cd8bde0-5fbc-47f4-bde1-5cfa22744ca4 is in state STARTED 2025-05-30 01:00:39.813696 | orchestrator | 2025-05-30 01:00:39 | INFO  | Task 1f2f793e-c31b-46b3-94ed-d62a950ff442 is in state STARTED 2025-05-30 01:00:39.813931 | orchestrator | 
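The ceph play is finished at this point (see the PLAY RECAP and the task timings above), and the job is back in the wait loop of the deployment tooling: every "Task … is in state STARTED" line is one poll of a still-running task, with a one-second sleep between rounds until each task reaches a final state. In shape the loop is no more than the following; poll_state is a stand-in for the real task-state lookup against the task backend, not an actual osism command:

    task_id=fb4c5da4-6736-4528-a700-d20c81fc8612
    poll_state() { echo SUCCESS; }   # placeholder for the real task-state query
    while [ "$(poll_state "$task_id")" = "STARTED" ]; do
        echo "$(date '+%F %T') | INFO | Task $task_id is in state STARTED"
        echo "$(date '+%F %T') | INFO | Wait 1 second(s) until the next check"
        sleep 1
    done
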
2025-05-30 01:00:39 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:00:42.875654 | orchestrator | 2025-05-30 01:00:42 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:00:42.877912 | orchestrator | 2025-05-30 01:00:42 | INFO  | Task edd098f8-8ea8-43c6-911e-b72a951746d9 is in state STARTED 2025-05-30 01:00:42.879251 | orchestrator | 2025-05-30 01:00:42 | INFO  | Task 9cd8bde0-5fbc-47f4-bde1-5cfa22744ca4 is in state STARTED 2025-05-30 01:00:42.880523 | orchestrator | 2025-05-30 01:00:42 | INFO  | Task 1f2f793e-c31b-46b3-94ed-d62a950ff442 is in state STARTED 2025-05-30 01:00:42.880558 | orchestrator | 2025-05-30 01:00:42 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:00:45.930121 | orchestrator | 2025-05-30 01:00:45 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:00:45.930983 | orchestrator | 2025-05-30 01:00:45 | INFO  | Task edd098f8-8ea8-43c6-911e-b72a951746d9 is in state STARTED 2025-05-30 01:00:45.931911 | orchestrator | 2025-05-30 01:00:45 | INFO  | Task 9cd8bde0-5fbc-47f4-bde1-5cfa22744ca4 is in state STARTED 2025-05-30 01:00:45.933193 | orchestrator | 2025-05-30 01:00:45 | INFO  | Task 1f2f793e-c31b-46b3-94ed-d62a950ff442 is in state STARTED 2025-05-30 01:00:45.933234 | orchestrator | 2025-05-30 01:00:45 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:00:48.988062 | orchestrator | 2025-05-30 01:00:48 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:00:48.989664 | orchestrator | 2025-05-30 01:00:48 | INFO  | Task edd098f8-8ea8-43c6-911e-b72a951746d9 is in state STARTED 2025-05-30 01:00:48.991502 | orchestrator | 2025-05-30 01:00:48 | INFO  | Task 9cd8bde0-5fbc-47f4-bde1-5cfa22744ca4 is in state STARTED 2025-05-30 01:00:48.992882 | orchestrator | 2025-05-30 01:00:48 | INFO  | Task 1f2f793e-c31b-46b3-94ed-d62a950ff442 is in state STARTED 2025-05-30 01:00:48.992908 | orchestrator | 2025-05-30 01:00:48 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:00:52.037383 | orchestrator | 2025-05-30 01:00:52 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:00:52.039980 | orchestrator | 2025-05-30 01:00:52 | INFO  | Task edd098f8-8ea8-43c6-911e-b72a951746d9 is in state STARTED 2025-05-30 01:00:52.040020 | orchestrator | 2025-05-30 01:00:52 | INFO  | Task 9cd8bde0-5fbc-47f4-bde1-5cfa22744ca4 is in state STARTED 2025-05-30 01:00:52.041014 | orchestrator | 2025-05-30 01:00:52 | INFO  | Task 1f2f793e-c31b-46b3-94ed-d62a950ff442 is in state STARTED 2025-05-30 01:00:52.041049 | orchestrator | 2025-05-30 01:00:52 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:00:55.090289 | orchestrator | 2025-05-30 01:00:55 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:00:55.091660 | orchestrator | 2025-05-30 01:00:55 | INFO  | Task edd098f8-8ea8-43c6-911e-b72a951746d9 is in state STARTED 2025-05-30 01:00:55.094231 | orchestrator | 2025-05-30 01:00:55 | INFO  | Task 9cd8bde0-5fbc-47f4-bde1-5cfa22744ca4 is in state STARTED 2025-05-30 01:00:55.095185 | orchestrator | 2025-05-30 01:00:55 | INFO  | Task 1f2f793e-c31b-46b3-94ed-d62a950ff442 is in state STARTED 2025-05-30 01:00:55.095227 | orchestrator | 2025-05-30 01:00:55 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:00:58.139725 | orchestrator | 2025-05-30 01:00:58 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:00:58.139887 | orchestrator | 2025-05-30 01:00:58 | INFO 
 | Task edd098f8-8ea8-43c6-911e-b72a951746d9 is in state STARTED 2025-05-30 01:00:58.141903 | orchestrator | 2025-05-30 01:00:58 | INFO  | Task 9cd8bde0-5fbc-47f4-bde1-5cfa22744ca4 is in state STARTED 2025-05-30 01:00:58.144368 | orchestrator | 2025-05-30 01:00:58 | INFO  | Task 1f2f793e-c31b-46b3-94ed-d62a950ff442 is in state STARTED 2025-05-30 01:00:58.144406 | orchestrator | 2025-05-30 01:00:58 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:01:01.190074 | orchestrator | 2025-05-30 01:01:01 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:01:01.190678 | orchestrator | 2025-05-30 01:01:01 | INFO  | Task edd098f8-8ea8-43c6-911e-b72a951746d9 is in state STARTED 2025-05-30 01:01:01.192694 | orchestrator | 2025-05-30 01:01:01 | INFO  | Task 9cd8bde0-5fbc-47f4-bde1-5cfa22744ca4 is in state STARTED 2025-05-30 01:01:01.194059 | orchestrator | 2025-05-30 01:01:01 | INFO  | Task 1f2f793e-c31b-46b3-94ed-d62a950ff442 is in state STARTED 2025-05-30 01:01:01.194100 | orchestrator | 2025-05-30 01:01:01 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:01:04.249720 | orchestrator | 2025-05-30 01:01:04.249852 | orchestrator | 2025-05-30 01:01:04.249869 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-30 01:01:04.249882 | orchestrator | 2025-05-30 01:01:04.249893 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-30 01:01:04.249905 | orchestrator | Friday 30 May 2025 00:58:26 +0000 (0:00:00.299) 0:00:00.299 ************ 2025-05-30 01:01:04.249918 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:01:04.249930 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:01:04.249941 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:01:04.249952 | orchestrator | 2025-05-30 01:01:04.249963 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-30 01:01:04.249974 | orchestrator | Friday 30 May 2025 00:58:27 +0000 (0:00:00.449) 0:00:00.748 ************ 2025-05-30 01:01:04.249985 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-05-30 01:01:04.249996 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-05-30 01:01:04.250007 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-05-30 01:01:04.250096 | orchestrator | 2025-05-30 01:01:04.250112 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-05-30 01:01:04.250124 | orchestrator | 2025-05-30 01:01:04.250200 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-30 01:01:04.250211 | orchestrator | Friday 30 May 2025 00:58:27 +0000 (0:00:00.310) 0:00:01.059 ************ 2025-05-30 01:01:04.250223 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 01:01:04.250235 | orchestrator | 2025-05-30 01:01:04.250247 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-05-30 01:01:04.250258 | orchestrator | Friday 30 May 2025 00:58:28 +0000 (0:00:00.879) 0:00:01.938 ************ 2025-05-30 01:01:04.250290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-30 01:01:04.250309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-30 01:01:04.250395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-30 01:01:04.250412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-30 01:01:04.250431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-30 01:01:04.250443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-30 01:01:04.250455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-30 01:01:04.250467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-30 01:01:04.250487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-30 01:01:04.250498 | orchestrator | 2025-05-30 01:01:04.250510 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-05-30 01:01:04.250528 | orchestrator | 
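Each of the three container definitions repeated above carries its own healthcheck: the keystone API container is probed with healthcheck_curl against http://<node>:5000, keystone_ssh with healthcheck_listen for sshd on port 8023, and keystone_fernet via /usr/bin/fernet-healthcheck.sh. Those helpers ship inside the Kolla images; from the host, the first two checks can be approximated like this (minimal sketch, using testbed-node-0's API address from the log):

    curl -sf http://192.168.16.10:5000/ >/dev/null && echo "keystone API answers on :5000"
    ss -ltn | grep -q ':8023 ' && echo "keystone_ssh is listening on :8023"
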
Friday 30 May 2025 00:58:30 +0000 (0:00:02.305) 0:00:04.243 ************ 2025-05-30 01:01:04.250540 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-05-30 01:01:04.250551 | orchestrator | 2025-05-30 01:01:04.250562 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-05-30 01:01:04.250573 | orchestrator | Friday 30 May 2025 00:58:31 +0000 (0:00:00.571) 0:00:04.815 ************ 2025-05-30 01:01:04.250584 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:01:04.250595 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:01:04.250606 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:01:04.250617 | orchestrator | 2025-05-30 01:01:04.250628 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-05-30 01:01:04.250638 | orchestrator | Friday 30 May 2025 00:58:31 +0000 (0:00:00.419) 0:00:05.235 ************ 2025-05-30 01:01:04.250649 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-30 01:01:04.250661 | orchestrator | 2025-05-30 01:01:04.250672 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-30 01:01:04.250683 | orchestrator | Friday 30 May 2025 00:58:32 +0000 (0:00:00.395) 0:00:05.630 ************ 2025-05-30 01:01:04.250694 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 01:01:04.250705 | orchestrator | 2025-05-30 01:01:04.250716 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-05-30 01:01:04.250727 | orchestrator | Friday 30 May 2025 00:58:32 +0000 (0:00:00.647) 0:00:06.278 ************ 2025-05-30 01:01:04.250744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-30 01:01:04.250782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': 
{'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-30 01:01:04.250813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-30 01:01:04.250827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-30 01:01:04.250843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-30 01:01:04.250862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-30 01:01:04.250876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-30 01:01:04.250896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-30 01:01:04.250910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-30 01:01:04.250922 | orchestrator | 2025-05-30 01:01:04.250935 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-05-30 01:01:04.250949 | orchestrator | Friday 30 May 2025 00:58:36 +0000 (0:00:03.125) 0:00:09.403 ************ 2025-05-30 01:01:04.250970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-30 01:01:04.250990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-30 01:01:04.251005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-30 01:01:04.251027 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:04.251041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-30 01:01:04.251055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-30 01:01:04.251077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-30 01:01:04.251090 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:01:04.251109 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-30 01:01:04.251123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-30 01:01:04.251143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-30 01:01:04.251157 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:01:04.251171 | orchestrator | 2025-05-30 01:01:04.251184 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-05-30 01:01:04.251196 | orchestrator | Friday 30 May 2025 00:58:36 +0000 (0:00:00.841) 0:00:10.245 ************ 2025-05-30 01:01:04.251208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
"roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-30 01:01:04.251227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-30 01:01:04.251239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-30 01:01:04.251251 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:04.251267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-30 01:01:04.251286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-30 01:01:04.251298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-30 01:01:04.251309 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:01:04.251328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-30 01:01:04.251341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-30 01:01:04.251363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-30 01:01:04.251381 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:01:04.251392 | orchestrator | 2025-05-30 01:01:04.251404 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-05-30 01:01:04.251415 | orchestrator | Friday 30 May 2025 00:58:38 +0000 (0:00:01.259) 0:00:11.504 ************ 2025-05-30 01:01:04.251427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-30 01:01:04.251439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-30 01:01:04.251458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-30 01:01:04.251482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-30 01:01:04.251494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-30 01:01:04.251506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-30 01:01:04.251517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-30 01:01:04.251529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-30 01:01:04.251546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-30 01:01:04.251558 | orchestrator | 2025-05-30 01:01:04.251570 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-05-30 01:01:04.251587 | orchestrator | 
Friday 30 May 2025 00:58:41 +0000 (0:00:03.385) 0:00:14.890 ************ 2025-05-30 01:01:04.251604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-30 01:01:04.251616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-30 01:01:04.251628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-30 01:01:04.251640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-30 01:01:04.251659 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-30 01:01:04.251683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-30 01:01:04.251695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-30 01:01:04.251707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-30 01:01:04.251718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-30 01:01:04.251729 | orchestrator | 2025-05-30 01:01:04.251740 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-05-30 01:01:04.251772 | orchestrator | Friday 30 May 2025 00:58:50 +0000 (0:00:08.622) 0:00:23.513 ************ 2025-05-30 01:01:04.251784 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:01:04.251795 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:01:04.251806 | orchestrator | changed: [testbed-node-2] 2025-05-30 01:01:04.251817 | orchestrator | 2025-05-30 01:01:04.251828 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-05-30 01:01:04.251839 | orchestrator | Friday 30 May 2025 00:58:52 +0000 (0:00:02.623) 0:00:26.136 ************ 2025-05-30 01:01:04.251850 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:04.251861 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:01:04.251871 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:01:04.251882 | orchestrator | 2025-05-30 01:01:04.251898 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-05-30 01:01:04.251917 | orchestrator | Friday 30 May 2025 00:58:53 +0000 (0:00:00.899) 0:00:27.036 ************ 2025-05-30 01:01:04.251928 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:04.251939 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:01:04.251950 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:01:04.251960 | orchestrator | 2025-05-30 01:01:04.251971 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-05-30 01:01:04.251982 | orchestrator | Friday 30 May 2025 00:58:54 +0000 (0:00:00.518) 0:00:27.554 ************ 2025-05-30 01:01:04.251993 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:04.252004 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:01:04.252014 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:01:04.252025 | orchestrator | 2025-05-30 01:01:04.252037 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-05-30 01:01:04.252048 | orchestrator | Friday 30 May 2025 00:58:54 +0000 (0:00:00.339) 0:00:27.893 ************ 2025-05-30 01:01:04.252065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-30 01:01:04.252077 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-30 01:01:04.252089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-30 01:01:04.252102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-30 01:01:04.252127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-30 01:01:04.252144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-30 01:01:04.252156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-30 01:01:04.252168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-30 01:01:04.252179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-30 01:01:04.252303 | orchestrator | 2025-05-30 01:01:04.252319 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-30 01:01:04.252330 | orchestrator | Friday 30 May 2025 00:58:56 +0000 (0:00:02.305) 0:00:30.198 ************ 2025-05-30 01:01:04.252341 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:04.252352 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:01:04.252377 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:01:04.252388 | orchestrator | 2025-05-30 01:01:04.252399 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-05-30 01:01:04.252422 | orchestrator | Friday 30 May 2025 00:58:57 +0000 (0:00:00.438) 0:00:30.637 ************ 2025-05-30 01:01:04.252433 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-30 01:01:04.252445 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-30 01:01:04.252463 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-30 01:01:04.252475 | orchestrator | 2025-05-30 01:01:04.252486 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-05-30 01:01:04.252497 | orchestrator | Friday 30 May 2025 00:58:59 +0000 (0:00:01.987) 0:00:32.625 ************ 2025-05-30 01:01:04.252508 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-30 01:01:04.252519 | orchestrator | 2025-05-30 01:01:04.252530 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-05-30 01:01:04.252541 | orchestrator | Friday 30 May 2025 00:58:59 +0000 (0:00:00.665) 0:00:33.290 ************ 2025-05-30 01:01:04.252552 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:04.252563 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:01:04.252574 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:01:04.252585 | orchestrator | 2025-05-30 01:01:04.252596 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-05-30 01:01:04.252607 | orchestrator | Friday 30 May 2025 00:59:01 +0000 (0:00:01.826) 0:00:35.117 ************ 2025-05-30 01:01:04.252618 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-30 01:01:04.252628 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-30 01:01:04.252639 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-30 01:01:04.252650 | orchestrator | 2025-05-30 01:01:04.252661 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-05-30 01:01:04.252672 | orchestrator | Friday 30 May 2025 00:59:02 +0000 (0:00:01.151) 0:00:36.269 ************ 2025-05-30 01:01:04.252683 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:01:04.252694 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:01:04.252705 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:01:04.252716 | orchestrator | 2025-05-30 01:01:04.252727 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-05-30 01:01:04.252737 | orchestrator | Friday 30 May 2025 00:59:03 +0000 (0:00:00.327) 0:00:36.596 ************ 2025-05-30 01:01:04.252822 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-30 01:01:04.252837 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-30 01:01:04.252848 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-30 01:01:04.252859 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-30 01:01:04.252870 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-30 01:01:04.252881 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-30 01:01:04.252892 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-30 01:01:04.252903 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-30 01:01:04.252922 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-30 01:01:04.252935 | orchestrator | changed: [testbed-node-0] => 
(item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-30 01:01:04.252948 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-30 01:01:04.252961 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-30 01:01:04.252974 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-30 01:01:04.252987 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-30 01:01:04.252998 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-30 01:01:04.253009 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-30 01:01:04.253020 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-30 01:01:04.253031 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-30 01:01:04.253042 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-30 01:01:04.253053 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-30 01:01:04.253064 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-30 01:01:04.253074 | orchestrator | 2025-05-30 01:01:04.253086 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-05-30 01:01:04.253097 | orchestrator | Friday 30 May 2025 00:59:13 +0000 (0:00:10.421) 0:00:47.017 ************ 2025-05-30 01:01:04.253107 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-30 01:01:04.253118 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-30 01:01:04.253129 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-30 01:01:04.253140 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-30 01:01:04.253151 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-30 01:01:04.253168 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-30 01:01:04.253180 | orchestrator | 2025-05-30 01:01:04.253191 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-05-30 01:01:04.253202 | orchestrator | Friday 30 May 2025 00:59:16 +0000 (0:00:03.217) 0:00:50.235 ************ 2025-05-30 01:01:04.253214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 
'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-30 01:01:04.253308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-30 01:01:04.253332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-30 01:01:04.253344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-30 01:01:04.253362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-30 01:01:04.253373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-30 01:01:04.253388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-30 01:01:04.253404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-30 01:01:04.253414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-30 01:01:04.253424 | orchestrator | 2025-05-30 01:01:04.253434 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-30 01:01:04.253444 | orchestrator | Friday 30 May 2025 00:59:19 +0000 (0:00:02.819) 0:00:53.054 ************ 2025-05-30 01:01:04.253454 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:04.253464 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:01:04.253474 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:01:04.253483 | orchestrator | 2025-05-30 01:01:04.253493 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 
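The container definitions dumped by the "Check keystone containers" task above (image, volumes, dimensions, healthcheck, haproxy) are the per-container parameters that kolla-ansible feeds to its kolla_docker module when (re)creating the keystone containers. Purely as an illustration of what that healthcheck dict amounts to — not the module's actual invocation, and the "s" unit suffixes are an assumption — the keystone container on testbed-node-0 corresponds roughly to Docker options like:

    docker run -d --name keystone \
      --health-cmd "healthcheck_curl http://192.168.16.10:5000" \
      --health-interval 30s \
      --health-retries 3 \
      --health-start-period 5s \
      --health-timeout 30s \
      -v /etc/kolla/keystone/:/var/lib/kolla/config_files/:ro \
      -v kolla_logs:/var/log/kolla/ \
      -v keystone_fernet_tokens:/etc/keystone/fernet-keys \
      registry.osism.tech/kolla/release/keystone:25.0.1.20241206

All image, volume, and healthcheck values are taken verbatim from the task output above; only the flag mapping itself is illustrative.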
2025-05-30 01:01:04.253503 | orchestrator | Friday 30 May 2025 00:59:19 +0000 (0:00:00.264) 0:00:53.318 ************ 2025-05-30 01:01:04.253513 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:01:04.253522 | orchestrator | 2025-05-30 01:01:04.253532 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-05-30 01:01:04.253542 | orchestrator | Friday 30 May 2025 00:59:22 +0000 (0:00:02.399) 0:00:55.718 ************ 2025-05-30 01:01:04.253551 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:01:04.253561 | orchestrator | 2025-05-30 01:01:04.253571 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-05-30 01:01:04.253580 | orchestrator | Friday 30 May 2025 00:59:24 +0000 (0:00:02.307) 0:00:58.026 ************ 2025-05-30 01:01:04.253590 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:01:04.253600 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:01:04.253610 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:01:04.253619 | orchestrator | 2025-05-30 01:01:04.253629 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-05-30 01:01:04.253639 | orchestrator | Friday 30 May 2025 00:59:25 +0000 (0:00:00.996) 0:00:59.022 ************ 2025-05-30 01:01:04.253649 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:01:04.253664 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:01:04.253674 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:01:04.253683 | orchestrator | 2025-05-30 01:01:04.253693 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-05-30 01:01:04.253703 | orchestrator | Friday 30 May 2025 00:59:26 +0000 (0:00:00.408) 0:00:59.431 ************ 2025-05-30 01:01:04.253713 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:04.253729 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:01:04.253738 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:01:04.253748 | orchestrator | 2025-05-30 01:01:04.253776 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-05-30 01:01:04.253786 | orchestrator | Friday 30 May 2025 00:59:26 +0000 (0:00:00.566) 0:00:59.997 ************ 2025-05-30 01:01:04.253795 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:01:04.253805 | orchestrator | 2025-05-30 01:01:04.253815 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-05-30 01:01:04.253825 | orchestrator | Friday 30 May 2025 00:59:39 +0000 (0:00:12.978) 0:01:12.976 ************ 2025-05-30 01:01:04.253834 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:01:04.253844 | orchestrator | 2025-05-30 01:01:04.253853 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-30 01:01:04.253863 | orchestrator | Friday 30 May 2025 00:59:48 +0000 (0:00:08.767) 0:01:21.743 ************ 2025-05-30 01:01:04.253873 | orchestrator | 2025-05-30 01:01:04.253882 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-30 01:01:04.253892 | orchestrator | Friday 30 May 2025 00:59:48 +0000 (0:00:00.053) 0:01:21.796 ************ 2025-05-30 01:01:04.253902 | orchestrator | 2025-05-30 01:01:04.253912 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-30 01:01:04.253921 | orchestrator | Friday 30 May 2025 00:59:48 +0000 (0:00:00.067) 
0:01:21.864 ************ 2025-05-30 01:01:04.253932 | orchestrator | 2025-05-30 01:01:04.253945 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-05-30 01:01:04.253961 | orchestrator | Friday 30 May 2025 00:59:48 +0000 (0:00:00.056) 0:01:21.920 ************ 2025-05-30 01:01:04.253972 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:01:04.253984 | orchestrator | changed: [testbed-node-2] 2025-05-30 01:01:04.253996 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:01:04.254007 | orchestrator | 2025-05-30 01:01:04.254045 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-05-30 01:01:04.254058 | orchestrator | Friday 30 May 2025 01:00:03 +0000 (0:00:14.756) 0:01:36.677 ************ 2025-05-30 01:01:04.254069 | orchestrator | changed: [testbed-node-2] 2025-05-30 01:01:04.254081 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:01:04.254092 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:01:04.254103 | orchestrator | 2025-05-30 01:01:04.254114 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-05-30 01:01:04.254127 | orchestrator | Friday 30 May 2025 01:00:11 +0000 (0:00:07.795) 0:01:44.472 ************ 2025-05-30 01:01:04.254138 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:01:04.254149 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:01:04.254160 | orchestrator | changed: [testbed-node-2] 2025-05-30 01:01:04.254172 | orchestrator | 2025-05-30 01:01:04.254182 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-30 01:01:04.254192 | orchestrator | Friday 30 May 2025 01:00:21 +0000 (0:00:10.217) 0:01:54.689 ************ 2025-05-30 01:01:04.254202 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 01:01:04.254212 | orchestrator | 2025-05-30 01:01:04.254221 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-05-30 01:01:04.254231 | orchestrator | Friday 30 May 2025 01:00:22 +0000 (0:00:00.768) 0:01:55.458 ************ 2025-05-30 01:01:04.254241 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:01:04.254250 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:01:04.254260 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:01:04.254270 | orchestrator | 2025-05-30 01:01:04.254280 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-05-30 01:01:04.254289 | orchestrator | Friday 30 May 2025 01:00:23 +0000 (0:00:01.000) 0:01:56.458 ************ 2025-05-30 01:01:04.254299 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:01:04.254309 | orchestrator | 2025-05-30 01:01:04.254318 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-05-30 01:01:04.254334 | orchestrator | Friday 30 May 2025 01:00:24 +0000 (0:00:01.516) 0:01:57.975 ************ 2025-05-30 01:01:04.254344 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-05-30 01:01:04.254354 | orchestrator | 2025-05-30 01:01:04.254467 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-05-30 01:01:04.254481 | orchestrator | Friday 30 May 2025 01:00:33 +0000 (0:00:08.532) 0:02:06.507 ************ 2025-05-30 01:01:04.254492 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 
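The "Run key distribution" task above pushes the fernet keys bootstrapped on testbed-node-0 to the other keystone hosts through the keystone_ssh containers (listening on port 8023, per the healthchecks logged earlier). A quick manual way to confirm the keys ended up identical on all three nodes — a sketch assuming SSH and Docker access from the orchestrator, not part of the playbook:

    for node in testbed-node-0 testbed-node-1 testbed-node-2; do
      echo "== $node"
      # compare the key files inside the keystone_fernet container on each node
      ssh "$node" "sudo docker exec keystone_fernet sh -c 'md5sum /etc/keystone/fernet-keys/*'"
    done
    # The per-file checksums should match across all three nodes once distribution has run.

The container name and fernet-keys path come from the volume definitions logged above; the check itself is only a convenience, the play's own "Waiting for Keystone SSH port to be UP" and distribution tasks already cover the automated path.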
2025-05-30 01:01:04.254502 | orchestrator | 2025-05-30 01:01:04.254512 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-05-30 01:01:04.254522 | orchestrator | Friday 30 May 2025 01:00:51 +0000 (0:00:18.369) 0:02:24.877 ************ 2025-05-30 01:01:04.254531 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-05-30 01:01:04.254541 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-05-30 01:01:04.254551 | orchestrator | 2025-05-30 01:01:04.254561 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-05-30 01:01:04.254570 | orchestrator | Friday 30 May 2025 01:00:58 +0000 (0:00:06.686) 0:02:31.563 ************ 2025-05-30 01:01:04.254580 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:04.254596 | orchestrator | 2025-05-30 01:01:04.254611 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-05-30 01:01:04.254621 | orchestrator | Friday 30 May 2025 01:00:58 +0000 (0:00:00.135) 0:02:31.699 ************ 2025-05-30 01:01:04.254631 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:04.254640 | orchestrator | 2025-05-30 01:01:04.254650 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-05-30 01:01:04.254666 | orchestrator | Friday 30 May 2025 01:00:58 +0000 (0:00:00.112) 0:02:31.811 ************ 2025-05-30 01:01:04.254676 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:04.254685 | orchestrator | 2025-05-30 01:01:04.254695 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-05-30 01:01:04.254704 | orchestrator | Friday 30 May 2025 01:00:58 +0000 (0:00:00.114) 0:02:31.926 ************ 2025-05-30 01:01:04.254714 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:04.254724 | orchestrator | 2025-05-30 01:01:04.254733 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-05-30 01:01:04.254743 | orchestrator | Friday 30 May 2025 01:00:58 +0000 (0:00:00.407) 0:02:32.334 ************ 2025-05-30 01:01:04.254772 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:01:04.254782 | orchestrator | 2025-05-30 01:01:04.254791 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-30 01:01:04.254801 | orchestrator | Friday 30 May 2025 01:01:02 +0000 (0:00:03.284) 0:02:35.619 ************ 2025-05-30 01:01:04.254810 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:04.254820 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:01:04.254830 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:01:04.254839 | orchestrator | 2025-05-30 01:01:04.254849 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 01:01:04.254859 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-30 01:01:04.254870 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-05-30 01:01:04.254880 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-05-30 01:01:04.254890 | orchestrator | 2025-05-30 01:01:04.254900 | orchestrator | 2025-05-30 01:01:04.254915 | orchestrator | TASKS RECAP 
******************************************************************** 2025-05-30 01:01:04.254926 | orchestrator | Friday 30 May 2025 01:01:02 +0000 (0:00:00.532) 0:02:36.151 ************ 2025-05-30 01:01:04.254944 | orchestrator | =============================================================================== 2025-05-30 01:01:04.254960 | orchestrator | service-ks-register : keystone | Creating services --------------------- 18.37s 2025-05-30 01:01:04.254976 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 14.76s 2025-05-30 01:01:04.254991 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 12.98s 2025-05-30 01:01:04.255007 | orchestrator | keystone : Copying files for keystone-fernet --------------------------- 10.42s 2025-05-30 01:01:04.255025 | orchestrator | keystone : Restart keystone container ---------------------------------- 10.22s 2025-05-30 01:01:04.255041 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 8.77s 2025-05-30 01:01:04.255051 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 8.62s 2025-05-30 01:01:04.255061 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 8.53s 2025-05-30 01:01:04.255070 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 7.80s 2025-05-30 01:01:04.255080 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.69s 2025-05-30 01:01:04.255090 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.39s 2025-05-30 01:01:04.255099 | orchestrator | keystone : Creating default user role ----------------------------------- 3.28s 2025-05-30 01:01:04.255111 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.22s 2025-05-30 01:01:04.255122 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.13s 2025-05-30 01:01:04.255133 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.82s 2025-05-30 01:01:04.255145 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 2.62s 2025-05-30 01:01:04.255156 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.40s 2025-05-30 01:01:04.255167 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.31s 2025-05-30 01:01:04.255179 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.31s 2025-05-30 01:01:04.255191 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.31s 2025-05-30 01:01:04.255203 | orchestrator | 2025-05-30 01:01:04 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:01:04.255215 | orchestrator | 2025-05-30 01:01:04 | INFO  | Task edd098f8-8ea8-43c6-911e-b72a951746d9 is in state SUCCESS 2025-05-30 01:01:04.255227 | orchestrator | 2025-05-30 01:01:04 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:01:04.255238 | orchestrator | 2025-05-30 01:01:04 | INFO  | Task c681daaf-eea7-4477-9a3c-03af4b0d214a is in state STARTED 2025-05-30 01:01:04.255250 | orchestrator | 2025-05-30 01:01:04 | INFO  | Task 9cd8bde0-5fbc-47f4-bde1-5cfa22744ca4 is in state STARTED 2025-05-30 01:01:04.255262 | orchestrator | 2025-05-30 01:01:04 | INFO  | Task 
9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:01:04.255273 | orchestrator | 2025-05-30 01:01:04 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:01:04.255291 | orchestrator | 2025-05-30 01:01:04 | INFO  | Task 1f2f793e-c31b-46b3-94ed-d62a950ff442 is in state STARTED 2025-05-30 01:01:04.255302 | orchestrator | 2025-05-30 01:01:04 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:01:07.307013 | orchestrator | 2025-05-30 01:01:07 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:01:07.307242 | orchestrator | 2025-05-30 01:01:07 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:01:07.307588 | orchestrator | 2025-05-30 01:01:07 | INFO  | Task c681daaf-eea7-4477-9a3c-03af4b0d214a is in state STARTED 2025-05-30 01:01:07.308272 | orchestrator | 2025-05-30 01:01:07 | INFO  | Task 9cd8bde0-5fbc-47f4-bde1-5cfa22744ca4 is in state STARTED 2025-05-30 01:01:07.308930 | orchestrator | 2025-05-30 01:01:07 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:01:07.309429 | orchestrator | 2025-05-30 01:01:07 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:01:07.309928 | orchestrator | 2025-05-30 01:01:07 | INFO  | Task 1f2f793e-c31b-46b3-94ed-d62a950ff442 is in state STARTED 2025-05-30 01:01:07.310090 | orchestrator | 2025-05-30 01:01:07 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:01:10.344803 | orchestrator | 2025-05-30 01:01:10 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:01:10.345015 | orchestrator | 2025-05-30 01:01:10 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:01:10.345966 | orchestrator | 2025-05-30 01:01:10 | INFO  | Task c681daaf-eea7-4477-9a3c-03af4b0d214a is in state STARTED 2025-05-30 01:01:10.346405 | orchestrator | 2025-05-30 01:01:10 | INFO  | Task 9cd8bde0-5fbc-47f4-bde1-5cfa22744ca4 is in state SUCCESS 2025-05-30 01:01:10.347988 | orchestrator | 2025-05-30 01:01:10.348025 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-30 01:01:10.348037 | orchestrator | 2025-05-30 01:01:10.348049 | orchestrator | PLAY [Apply role fetch-keys] *************************************************** 2025-05-30 01:01:10.348060 | orchestrator | 2025-05-30 01:01:10.348071 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-05-30 01:01:10.348082 | orchestrator | Friday 30 May 2025 01:00:41 +0000 (0:00:00.444) 0:00:00.444 ************ 2025-05-30 01:01:10.348093 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0 2025-05-30 01:01:10.348104 | orchestrator | 2025-05-30 01:01:10.348115 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-30 01:01:10.348126 | orchestrator | Friday 30 May 2025 01:00:42 +0000 (0:00:00.230) 0:00:00.674 ************ 2025-05-30 01:01:10.348137 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-30 01:01:10.348149 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-05-30 01:01:10.348160 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-05-30 01:01:10.348170 | orchestrator | 2025-05-30 01:01:10.348181 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 
2025-05-30 01:01:10.348192 | orchestrator | Friday 30 May 2025 01:00:42 +0000 (0:00:00.848) 0:00:01.523 ************ 2025-05-30 01:01:10.348203 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2025-05-30 01:01:10.348214 | orchestrator | 2025-05-30 01:01:10.348225 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-05-30 01:01:10.348236 | orchestrator | Friday 30 May 2025 01:00:43 +0000 (0:00:00.223) 0:00:01.746 ************ 2025-05-30 01:01:10.348247 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:01:10.348258 | orchestrator | 2025-05-30 01:01:10.348270 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-30 01:01:10.348281 | orchestrator | Friday 30 May 2025 01:00:43 +0000 (0:00:00.558) 0:00:02.304 ************ 2025-05-30 01:01:10.348292 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:01:10.348302 | orchestrator | 2025-05-30 01:01:10.348313 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-30 01:01:10.348324 | orchestrator | Friday 30 May 2025 01:00:43 +0000 (0:00:00.116) 0:00:02.421 ************ 2025-05-30 01:01:10.348335 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:01:10.348444 | orchestrator | 2025-05-30 01:01:10.348467 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-30 01:01:10.348478 | orchestrator | Friday 30 May 2025 01:00:44 +0000 (0:00:00.395) 0:00:02.816 ************ 2025-05-30 01:01:10.348489 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:01:10.348524 | orchestrator | 2025-05-30 01:01:10.348542 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-30 01:01:10.348571 | orchestrator | Friday 30 May 2025 01:00:44 +0000 (0:00:00.155) 0:00:02.971 ************ 2025-05-30 01:01:10.348590 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:01:10.348608 | orchestrator | 2025-05-30 01:01:10.348625 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-05-30 01:01:10.348643 | orchestrator | Friday 30 May 2025 01:00:44 +0000 (0:00:00.118) 0:00:03.089 ************ 2025-05-30 01:01:10.348662 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:01:10.348826 | orchestrator | 2025-05-30 01:01:10.348851 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-05-30 01:01:10.348872 | orchestrator | Friday 30 May 2025 01:00:44 +0000 (0:00:00.158) 0:00:03.247 ************ 2025-05-30 01:01:10.348891 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:10.348908 | orchestrator | 2025-05-30 01:01:10.348919 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-05-30 01:01:10.348931 | orchestrator | Friday 30 May 2025 01:00:44 +0000 (0:00:00.134) 0:00:03.382 ************ 2025-05-30 01:01:10.348941 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:01:10.348953 | orchestrator | 2025-05-30 01:01:10.348971 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-30 01:01:10.348989 | orchestrator | Friday 30 May 2025 01:00:44 +0000 (0:00:00.128) 0:00:03.511 ************ 2025-05-30 01:01:10.349007 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-30 01:01:10.349026 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 
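The ceph-facts tasks above ("check if podman binary is present", "set_fact container_binary", "set_fact ceph_cmd") decide whether ceph commands get wrapped in podman or docker; the tasks that follow then locate a running mon container on each node and query the cluster fsid through it. Roughly the same checks by hand — an illustrative sketch, not the role's exact logic:

    # Which container runtime is available on this host?
    command -v podman >/dev/null 2>&1 && echo podman || echo docker
    # Is a mon container running here? (same filter the role uses, see below)
    docker ps -q --filter name=ceph-mon-testbed-node-0
    # Ask the running mon for the cluster fsid, as the "get current fsid" task does
    docker exec ceph-mon-testbed-node-0 ceph --cluster ceph fsid

The `docker ps -q --filter name=ceph-mon-testbed-node-0` invocation appears verbatim in the module output further down; the podman/docker fallback shown here is a simplification of the role's conditionals.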
2025-05-30 01:01:10.349037 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-30 01:01:10.349048 | orchestrator | 2025-05-30 01:01:10.349059 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-05-30 01:01:10.349070 | orchestrator | Friday 30 May 2025 01:00:45 +0000 (0:00:00.844) 0:00:04.355 ************ 2025-05-30 01:01:10.349081 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:01:10.349092 | orchestrator | 2025-05-30 01:01:10.349103 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-05-30 01:01:10.349113 | orchestrator | Friday 30 May 2025 01:00:46 +0000 (0:00:00.264) 0:00:04.619 ************ 2025-05-30 01:01:10.349124 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-30 01:01:10.349135 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-30 01:01:10.349146 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-30 01:01:10.349157 | orchestrator | 2025-05-30 01:01:10.349167 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-30 01:01:10.349178 | orchestrator | Friday 30 May 2025 01:00:47 +0000 (0:00:01.949) 0:00:06.569 ************ 2025-05-30 01:01:10.349189 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-30 01:01:10.349209 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-30 01:01:10.349221 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-30 01:01:10.349232 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:10.349242 | orchestrator | 2025-05-30 01:01:10.349253 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-30 01:01:10.349280 | orchestrator | Friday 30 May 2025 01:00:48 +0000 (0:00:00.418) 0:00:06.988 ************ 2025-05-30 01:01:10.349294 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-30 01:01:10.349307 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-30 01:01:10.349330 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-30 01:01:10.349342 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:10.349353 | orchestrator | 2025-05-30 01:01:10.349364 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-30 01:01:10.349374 | orchestrator | Friday 30 May 2025 01:00:49 +0000 (0:00:00.766) 0:00:07.755 ************ 2025-05-30 01:01:10.349387 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-30 01:01:10.349401 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-30 01:01:10.349412 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-30 01:01:10.349423 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:10.349437 | orchestrator | 2025-05-30 01:01:10.349450 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-30 01:01:10.349463 | orchestrator | Friday 30 May 2025 01:00:49 +0000 (0:00:00.168) 0:00:07.923 ************ 2025-05-30 01:01:10.349478 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '32cc4543507b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-30 01:00:46.633829', 'end': '2025-05-30 01:00:46.679718', 'delta': '0:00:00.045889', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['32cc4543507b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-05-30 01:01:10.349499 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': 'e82e8bdc94b8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-30 01:00:47.212807', 'end': '2025-05-30 01:00:47.257600', 'delta': '0:00:00.044793', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e82e8bdc94b8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-30 01:01:10.349523 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '4d5667bd6d83', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-30 01:00:47.768399', 'end': '2025-05-30 01:00:47.819256', 'delta': '0:00:00.050857', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': 
['4d5667bd6d83'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-05-30 01:01:10.349543 | orchestrator | 2025-05-30 01:01:10.349554 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-30 01:01:10.349565 | orchestrator | Friday 30 May 2025 01:00:49 +0000 (0:00:00.221) 0:00:08.144 ************ 2025-05-30 01:01:10.349577 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:01:10.349587 | orchestrator | 2025-05-30 01:01:10.349598 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-30 01:01:10.349609 | orchestrator | Friday 30 May 2025 01:00:49 +0000 (0:00:00.245) 0:00:08.389 ************ 2025-05-30 01:01:10.349619 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2025-05-30 01:01:10.349630 | orchestrator | 2025-05-30 01:01:10.349641 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-30 01:01:10.349652 | orchestrator | Friday 30 May 2025 01:00:51 +0000 (0:00:01.590) 0:00:09.980 ************ 2025-05-30 01:01:10.349662 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:10.349673 | orchestrator | 2025-05-30 01:01:10.349684 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-30 01:01:10.349695 | orchestrator | Friday 30 May 2025 01:00:51 +0000 (0:00:00.148) 0:00:10.129 ************ 2025-05-30 01:01:10.349706 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:10.349716 | orchestrator | 2025-05-30 01:01:10.349727 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-30 01:01:10.349764 | orchestrator | Friday 30 May 2025 01:00:51 +0000 (0:00:00.209) 0:00:10.338 ************ 2025-05-30 01:01:10.349778 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:10.349788 | orchestrator | 2025-05-30 01:01:10.349799 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-30 01:01:10.349810 | orchestrator | Friday 30 May 2025 01:00:51 +0000 (0:00:00.120) 0:00:10.459 ************ 2025-05-30 01:01:10.349821 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:01:10.349831 | orchestrator | 2025-05-30 01:01:10.349842 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-30 01:01:10.349853 | orchestrator | Friday 30 May 2025 01:00:51 +0000 (0:00:00.132) 0:00:10.591 ************ 2025-05-30 01:01:10.349863 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:10.349874 | orchestrator | 2025-05-30 01:01:10.349885 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-30 01:01:10.349895 | orchestrator | Friday 30 May 2025 01:00:52 +0000 (0:00:00.225) 0:00:10.817 ************ 2025-05-30 01:01:10.349906 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:10.349916 | orchestrator | 2025-05-30 01:01:10.349927 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-30 01:01:10.349938 | orchestrator | Friday 30 May 2025 01:00:52 +0000 (0:00:00.119) 0:00:10.937 ************ 2025-05-30 01:01:10.349949 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:10.349959 | orchestrator | 2025-05-30 01:01:10.349970 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-30 
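For reference, a minimal Ansible sketch of what the two steps above appear to do: probe each monitor host for a running ceph-mon container and remember the first host that returned a container ID. This is an illustration only, not the ceph-ansible source; the group name `mons` and the registered variable name are assumptions.

# Sketch only -- approximates the "find a running mon container" /
# "set_fact running_mon - container" tasks seen in the log above.
- name: find a running mon container
  command: "{{ container_binary }} ps -q --filter name=ceph-mon-{{ hostvars[item]['ansible_facts']['hostname'] }}"
  register: find_running_mon_container
  failed_when: false
  delegate_to: "{{ item }}"
  loop: "{{ groups['mons'] }}"        # group name assumed

- name: set_fact running_mon - container
  set_fact:
    running_mon: "{{ item.item }}"
  loop: "{{ find_running_mon_container.results }}"
  when: item.stdout_lines | default([]) | length > 0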
01:01:10.349981 | orchestrator | Friday 30 May 2025 01:00:52 +0000 (0:00:00.141) 0:00:11.079 ************ 2025-05-30 01:01:10.349991 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:10.350002 | orchestrator | 2025-05-30 01:01:10.350013 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-30 01:01:10.350078 | orchestrator | Friday 30 May 2025 01:00:52 +0000 (0:00:00.133) 0:00:11.212 ************ 2025-05-30 01:01:10.350098 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:10.350117 | orchestrator | 2025-05-30 01:01:10.350136 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-30 01:01:10.350167 | orchestrator | Friday 30 May 2025 01:00:52 +0000 (0:00:00.127) 0:00:11.340 ************ 2025-05-30 01:01:10.350186 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:10.350198 | orchestrator | 2025-05-30 01:01:10.350208 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-30 01:01:10.350219 | orchestrator | Friday 30 May 2025 01:00:52 +0000 (0:00:00.134) 0:00:11.475 ************ 2025-05-30 01:01:10.350230 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:10.350240 | orchestrator | 2025-05-30 01:01:10.350251 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-30 01:01:10.350262 | orchestrator | Friday 30 May 2025 01:00:53 +0000 (0:00:00.299) 0:00:11.774 ************ 2025-05-30 01:01:10.350273 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:10.350283 | orchestrator | 2025-05-30 01:01:10.350294 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-30 01:01:10.350305 | orchestrator | Friday 30 May 2025 01:00:53 +0000 (0:00:00.150) 0:00:11.925 ************ 2025-05-30 01:01:10.350347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 01:01:10.350371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 01:01:10.350383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 01:01:10.350395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': 
None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 01:01:10.350406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 01:01:10.350417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 01:01:10.350428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 01:01:10.350447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-30 01:01:10.350475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_edc9b60b-d3ff-41c2-8d12-039335a3b5c5', 'scsi-SQEMU_QEMU_HARDDISK_edc9b60b-d3ff-41c2-8d12-039335a3b5c5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_edc9b60b-d3ff-41c2-8d12-039335a3b5c5-part1', 'scsi-SQEMU_QEMU_HARDDISK_edc9b60b-d3ff-41c2-8d12-039335a3b5c5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_edc9b60b-d3ff-41c2-8d12-039335a3b5c5-part14', 'scsi-SQEMU_QEMU_HARDDISK_edc9b60b-d3ff-41c2-8d12-039335a3b5c5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_edc9b60b-d3ff-41c2-8d12-039335a3b5c5-part15', 'scsi-SQEMU_QEMU_HARDDISK_edc9b60b-d3ff-41c2-8d12-039335a3b5c5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_edc9b60b-d3ff-41c2-8d12-039335a3b5c5-part16', 'scsi-SQEMU_QEMU_HARDDISK_edc9b60b-d3ff-41c2-8d12-039335a3b5c5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 01:01:10.350491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-30-00-02-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-30 01:01:10.350503 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:10.350514 | orchestrator | 2025-05-30 01:01:10.350525 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-30 01:01:10.350536 | orchestrator | Friday 30 May 2025 01:00:53 +0000 (0:00:00.256) 0:00:12.181 ************ 2025-05-30 01:01:10.350547 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:10.350558 | orchestrator | 2025-05-30 01:01:10.350568 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-30 01:01:10.350579 | orchestrator | Friday 30 May 2025 01:00:53 +0000 (0:00:00.265) 0:00:12.446 ************ 2025-05-30 01:01:10.350590 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:10.350601 | orchestrator | 2025-05-30 01:01:10.350611 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-05-30 01:01:10.350636 | 
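The long run of skipped items above is the role iterating over every block device reported by Ansible facts (loop0 through loop7, sda, sr0); because osd_auto_discovery is not enabled in this deployment, no device is added to the OSD list. A rough sketch of that kind of filter follows; the exact conditions are assumptions, not the actual role code.

# Sketch only -- builds a candidate OSD device list from Ansible facts
# when osd_auto_discovery is enabled (it is disabled here, hence the skips).
- name: set_fact devices generate device list when osd_auto_discovery
  set_fact:
    devices: "{{ devices | default([]) + ['/dev/' + item.key] }}"
  loop: "{{ ansible_facts['devices'] | dict2items }}"
  when:
    - osd_auto_discovery | default(false) | bool
    - item.value.partitions | length == 0      # skip disks that already carry partitions
    - item.value.holders | length == 0         # skip disks held by LVM/md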
orchestrator | Friday 30 May 2025 01:00:53 +0000 (0:00:00.122) 0:00:12.568 ************ 2025-05-30 01:01:10.350647 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:10.350657 | orchestrator | 2025-05-30 01:01:10.350668 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-30 01:01:10.350679 | orchestrator | Friday 30 May 2025 01:00:54 +0000 (0:00:00.136) 0:00:12.705 ************ 2025-05-30 01:01:10.350690 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:01:10.350701 | orchestrator | 2025-05-30 01:01:10.350711 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-30 01:01:10.350722 | orchestrator | Friday 30 May 2025 01:00:54 +0000 (0:00:00.466) 0:00:13.172 ************ 2025-05-30 01:01:10.350733 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:01:10.350783 | orchestrator | 2025-05-30 01:01:10.350796 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-30 01:01:10.350806 | orchestrator | Friday 30 May 2025 01:00:54 +0000 (0:00:00.123) 0:00:13.295 ************ 2025-05-30 01:01:10.350817 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:01:10.350828 | orchestrator | 2025-05-30 01:01:10.350839 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-30 01:01:10.350849 | orchestrator | Friday 30 May 2025 01:00:56 +0000 (0:00:01.468) 0:00:14.764 ************ 2025-05-30 01:01:10.350860 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:01:10.350871 | orchestrator | 2025-05-30 01:01:10.350882 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-30 01:01:10.350893 | orchestrator | Friday 30 May 2025 01:00:56 +0000 (0:00:00.138) 0:00:14.902 ************ 2025-05-30 01:01:10.350903 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:10.350914 | orchestrator | 2025-05-30 01:01:10.350925 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-30 01:01:10.350936 | orchestrator | Friday 30 May 2025 01:00:56 +0000 (0:00:00.422) 0:00:15.325 ************ 2025-05-30 01:01:10.350946 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:10.350957 | orchestrator | 2025-05-30 01:01:10.350968 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-30 01:01:10.350979 | orchestrator | Friday 30 May 2025 01:00:56 +0000 (0:00:00.147) 0:00:15.473 ************ 2025-05-30 01:01:10.350989 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-30 01:01:10.351000 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-30 01:01:10.351011 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-30 01:01:10.351022 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:10.351032 | orchestrator | 2025-05-30 01:01:10.351043 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-30 01:01:10.351054 | orchestrator | Friday 30 May 2025 01:00:57 +0000 (0:00:00.447) 0:00:15.920 ************ 2025-05-30 01:01:10.351065 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-30 01:01:10.351081 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-30 01:01:10.351092 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-30 01:01:10.351103 | orchestrator | 
skipping: [testbed-node-0] 2025-05-30 01:01:10.351113 | orchestrator | 2025-05-30 01:01:10.351131 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-05-30 01:01:10.351143 | orchestrator | Friday 30 May 2025 01:00:57 +0000 (0:00:00.436) 0:00:16.357 ************ 2025-05-30 01:01:10.351154 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-30 01:01:10.351165 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-30 01:01:10.351176 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-30 01:01:10.351186 | orchestrator | 2025-05-30 01:01:10.351197 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-05-30 01:01:10.351208 | orchestrator | Friday 30 May 2025 01:00:58 +0000 (0:00:01.120) 0:00:17.477 ************ 2025-05-30 01:01:10.351226 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-30 01:01:10.351237 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-30 01:01:10.351247 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-30 01:01:10.351258 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:10.351269 | orchestrator | 2025-05-30 01:01:10.351279 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-05-30 01:01:10.351290 | orchestrator | Friday 30 May 2025 01:00:59 +0000 (0:00:00.197) 0:00:17.675 ************ 2025-05-30 01:01:10.351301 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-30 01:01:10.351311 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-30 01:01:10.351322 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-30 01:01:10.351333 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:10.351344 | orchestrator | 2025-05-30 01:01:10.351355 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-30 01:01:10.351365 | orchestrator | Friday 30 May 2025 01:00:59 +0000 (0:00:00.210) 0:00:17.885 ************ 2025-05-30 01:01:10.351376 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-05-30 01:01:10.351387 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-30 01:01:10.351398 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-30 01:01:10.351409 | orchestrator | 2025-05-30 01:01:10.351420 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-30 01:01:10.351431 | orchestrator | Friday 30 May 2025 01:00:59 +0000 (0:00:00.201) 0:00:18.087 ************ 2025-05-30 01:01:10.351441 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:10.351452 | orchestrator | 2025-05-30 01:01:10.351463 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-05-30 01:01:10.351474 | orchestrator | Friday 30 May 2025 01:00:59 +0000 (0:00:00.128) 0:00:18.216 ************ 2025-05-30 01:01:10.351485 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:01:10.351496 | orchestrator | 2025-05-30 01:01:10.351506 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-05-30 01:01:10.351517 | orchestrator | Friday 30 May 2025 01:00:59 +0000 (0:00:00.110) 0:00:18.327 
************ 2025-05-30 01:01:10.351528 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-30 01:01:10.351539 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-30 01:01:10.351550 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-30 01:01:10.351560 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-30 01:01:10.351571 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-30 01:01:10.351582 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-30 01:01:10.351592 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-30 01:01:10.351603 | orchestrator | 2025-05-30 01:01:10.351614 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-05-30 01:01:10.351624 | orchestrator | Friday 30 May 2025 01:01:00 +0000 (0:00:00.984) 0:00:19.312 ************ 2025-05-30 01:01:10.351635 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-30 01:01:10.351646 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-30 01:01:10.351657 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-30 01:01:10.351672 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-30 01:01:10.351691 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-30 01:01:10.351719 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-30 01:01:10.351731 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-30 01:01:10.351767 | orchestrator | 2025-05-30 01:01:10.351785 | orchestrator | TASK [ceph-fetch-keys : lookup keys in /etc/ceph] ****************************** 2025-05-30 01:01:10.351805 | orchestrator | Friday 30 May 2025 01:01:02 +0000 (0:00:01.537) 0:00:20.849 ************ 2025-05-30 01:01:10.351823 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:01:10.351840 | orchestrator | 2025-05-30 01:01:10.351851 | orchestrator | TASK [ceph-fetch-keys : create a local fetch directory if it does not exist] *** 2025-05-30 01:01:10.351862 | orchestrator | Friday 30 May 2025 01:01:02 +0000 (0:00:00.454) 0:00:21.304 ************ 2025-05-30 01:01:10.351873 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-30 01:01:10.351884 | orchestrator | 2025-05-30 01:01:10.351901 | orchestrator | TASK [ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/] *** 2025-05-30 01:01:10.351913 | orchestrator | Friday 30 May 2025 01:01:03 +0000 (0:00:00.663) 0:00:21.967 ************ 2025-05-30 01:01:10.351932 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.admin.keyring) 2025-05-30 01:01:10.351943 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder-backup.keyring) 2025-05-30 01:01:10.351954 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder.keyring) 2025-05-30 01:01:10.351965 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.crash.keyring) 2025-05-30 01:01:10.351977 | orchestrator | changed: [testbed-node-0] 
=> (item=/etc/ceph/ceph.client.glance.keyring) 2025-05-30 01:01:10.351987 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.gnocchi.keyring) 2025-05-30 01:01:10.351998 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.manila.keyring) 2025-05-30 01:01:10.352009 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.nova.keyring) 2025-05-30 01:01:10.352020 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-0.keyring) 2025-05-30 01:01:10.352031 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-1.keyring) 2025-05-30 01:01:10.352041 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-2.keyring) 2025-05-30 01:01:10.352052 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mon.keyring) 2025-05-30 01:01:10.352063 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) 2025-05-30 01:01:10.352074 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) 2025-05-30 01:01:10.352085 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) 2025-05-30 01:01:10.352096 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) 2025-05-30 01:01:10.352107 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr/ceph.keyring) 2025-05-30 01:01:10.352120 | orchestrator | 2025-05-30 01:01:10.352140 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 01:01:10.352160 | orchestrator | testbed-node-0 : ok=28  changed=3  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-05-30 01:01:10.352179 | orchestrator | 2025-05-30 01:01:10.352191 | orchestrator | 2025-05-30 01:01:10.352202 | orchestrator | 2025-05-30 01:01:10.352212 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-30 01:01:10.352223 | orchestrator | Friday 30 May 2025 01:01:09 +0000 (0:00:05.721) 0:00:27.688 ************ 2025-05-30 01:01:10.352234 | orchestrator | =============================================================================== 2025-05-30 01:01:10.352245 | orchestrator | ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/ --- 5.72s 2025-05-30 01:01:10.352255 | orchestrator | ceph-facts : find a running mon container ------------------------------- 1.95s 2025-05-30 01:01:10.352274 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.59s 2025-05-30 01:01:10.352285 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.54s 2025-05-30 01:01:10.352296 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 1.47s 2025-05-30 01:01:10.352307 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.12s 2025-05-30 01:01:10.352317 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 0.98s 2025-05-30 01:01:10.352328 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.85s 2025-05-30 01:01:10.352339 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.84s 2025-05-30 01:01:10.352350 | orchestrator | ceph-facts : check if the ceph mon socket is in-use 
--------------------- 0.77s 2025-05-30 01:01:10.352360 | orchestrator | ceph-fetch-keys : create a local fetch directory if it does not exist --- 0.66s 2025-05-30 01:01:10.352371 | orchestrator | ceph-facts : check if it is atomic host --------------------------------- 0.56s 2025-05-30 01:01:10.352382 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.47s 2025-05-30 01:01:10.352393 | orchestrator | ceph-fetch-keys : lookup keys in /etc/ceph ------------------------------ 0.45s 2025-05-30 01:01:10.352403 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.45s 2025-05-30 01:01:10.352414 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.44s 2025-05-30 01:01:10.352425 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.42s 2025-05-30 01:01:10.352436 | orchestrator | ceph-facts : check for a ceph mon socket -------------------------------- 0.42s 2025-05-30 01:01:10.352447 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.40s 2025-05-30 01:01:10.352457 | orchestrator | ceph-facts : resolve bluestore_wal_device link(s) ----------------------- 0.30s 2025-05-30 01:01:10.352468 | orchestrator | 2025-05-30 01:01:10 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:01:10.352479 | orchestrator | 2025-05-30 01:01:10 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:01:10.352495 | orchestrator | 2025-05-30 01:01:10 | INFO  | Task 1f2f793e-c31b-46b3-94ed-d62a950ff442 is in state STARTED 2025-05-30 01:01:10.352506 | orchestrator | 2025-05-30 01:01:10 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:01:13.383823 | orchestrator | 2025-05-30 01:01:13 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:01:13.383951 | orchestrator | 2025-05-30 01:01:13 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:01:13.385289 | orchestrator | 2025-05-30 01:01:13 | INFO  | Task c681daaf-eea7-4477-9a3c-03af4b0d214a is in state STARTED 2025-05-30 01:01:13.386363 | orchestrator | 2025-05-30 01:01:13 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:01:13.387537 | orchestrator | 2025-05-30 01:01:13 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:01:13.388589 | orchestrator | 2025-05-30 01:01:13 | INFO  | Task 1f2f793e-c31b-46b3-94ed-d62a950ff442 is in state SUCCESS 2025-05-30 01:01:13.388915 | orchestrator | 2025-05-30 01:01:13 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:01:16.443865 | orchestrator | 2025-05-30 01:01:16 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:01:16.444891 | orchestrator | 2025-05-30 01:01:16 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:01:16.446480 | orchestrator | 2025-05-30 01:01:16 | INFO  | Task c681daaf-eea7-4477-9a3c-03af4b0d214a is in state STARTED 2025-05-30 01:01:16.447784 | orchestrator | 2025-05-30 01:01:16 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:01:16.449505 | orchestrator | 2025-05-30 01:01:16 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:01:16.450954 | orchestrator | 2025-05-30 01:01:16 | INFO  | Task 42db4724-c4d3-44ca-9eb2-f774e72eac63 is in state STARTED 2025-05-30 
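The ceph-fetch-keys play that just finished pulls the client and bootstrap keyrings from the first monitor node back to the manager under /share/11111111-1111-1111-1111-111111111111/. A minimal sketch of such a step using the fetch module; the destination layout and variable names are assumptions.

# Sketch only -- fetches keyrings from the monitor to the Ansible
# control host, mirroring the "copy ceph user and bootstrap keys to
# the ansible server" task above (abridged file list).
- name: copy ceph user and bootstrap keys to the ansible server
  fetch:
    src: "{{ item }}"
    dest: "/share/{{ fsid }}{{ item }}"   # destination layout assumed
    flat: true
  loop:
    - /etc/ceph/ceph.client.admin.keyring
    - /etc/ceph/ceph.mon.keyring
    - /var/lib/ceph/bootstrap-osd/ceph.keyring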
01:01:16.451201 | orchestrator | 2025-05-30 01:01:16 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:02:05.235969 | orchestrator | 2025-05-30 01:02:05 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:02:05.237509 |
orchestrator | 2025-05-30 01:02:05 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:02:05.238837 | orchestrator | 2025-05-30 01:02:05 | INFO  | Task c681daaf-eea7-4477-9a3c-03af4b0d214a is in state STARTED 2025-05-30 01:02:05.242013 | orchestrator | 2025-05-30 01:02:05 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:02:05.242878 | orchestrator | 2025-05-30 01:02:05 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:02:05.244403 | orchestrator | 2025-05-30 01:02:05 | INFO  | Task 42db4724-c4d3-44ca-9eb2-f774e72eac63 is in state STARTED 2025-05-30 01:02:05.244429 | orchestrator | 2025-05-30 01:02:05 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:02:08.266773 | orchestrator | 2025-05-30 01:02:08 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:02:08.269413 | orchestrator | 2025-05-30 01:02:08 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:02:08.269756 | orchestrator | 2025-05-30 01:02:08 | INFO  | Task c681daaf-eea7-4477-9a3c-03af4b0d214a is in state STARTED 2025-05-30 01:02:08.270259 | orchestrator | 2025-05-30 01:02:08 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:02:08.270800 | orchestrator | 2025-05-30 01:02:08 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:02:08.271318 | orchestrator | 2025-05-30 01:02:08 | INFO  | Task 5d4dae60-7491-457a-a680-1ca83bc00de0 is in state STARTED 2025-05-30 01:02:08.272222 | orchestrator | 2025-05-30 01:02:08 | INFO  | Task 42db4724-c4d3-44ca-9eb2-f774e72eac63 is in state SUCCESS 2025-05-30 01:02:08.272411 | orchestrator | 2025-05-30 01:02:08.272430 | orchestrator | 2025-05-30 01:02:08.272442 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-05-30 01:02:08.272453 | orchestrator | 2025-05-30 01:02:08.272465 | orchestrator | TASK [Check ceph keys] ********************************************************* 2025-05-30 01:02:08.272477 | orchestrator | Friday 30 May 2025 01:00:33 +0000 (0:00:00.140) 0:00:00.140 ************ 2025-05-30 01:02:08.272488 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-05-30 01:02:08.272499 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-30 01:02:08.272526 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-30 01:02:08.272538 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-05-30 01:02:08.272549 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-30 01:02:08.272577 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-05-30 01:02:08.272589 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-05-30 01:02:08.272600 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-05-30 01:02:08.272611 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-05-30 01:02:08.272646 | orchestrator | 2025-05-30 01:02:08.272657 | orchestrator | TASK [Set _fetch_ceph_keys fact] *********************************************** 2025-05-30 01:02:08.272668 | orchestrator | Friday 30 May 2025 01:00:36 +0000 (0:00:02.943) 0:00:03.083 ************ 
2025-05-30 01:02:08.272679 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-05-30 01:02:08.272690 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-30 01:02:08.272701 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-30 01:02:08.272712 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-05-30 01:02:08.272723 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-30 01:02:08.272734 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-05-30 01:02:08.272745 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-05-30 01:02:08.272756 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-05-30 01:02:08.272766 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-05-30 01:02:08.272797 | orchestrator | 2025-05-30 01:02:08.272808 | orchestrator | TASK [Point out that the following task takes some time and does not give any output] *** 2025-05-30 01:02:08.272819 | orchestrator | Friday 30 May 2025 01:00:36 +0000 (0:00:00.249) 0:00:03.333 ************ 2025-05-30 01:02:08.272830 | orchestrator | ok: [testbed-manager] => { 2025-05-30 01:02:08.272844 | orchestrator |  "msg": "The task 'Fetch ceph keys from the first monitor node' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete." 2025-05-30 01:02:08.272858 | orchestrator | } 2025-05-30 01:02:08.272869 | orchestrator | 2025-05-30 01:02:08.272881 | orchestrator | TASK [Fetch ceph keys from the first monitor node] ***************************** 2025-05-30 01:02:08.272892 | orchestrator | Friday 30 May 2025 01:00:36 +0000 (0:00:00.164) 0:00:03.498 ************ 2025-05-30 01:02:08.272903 | orchestrator | changed: [testbed-manager] 2025-05-30 01:02:08.272914 | orchestrator | 2025-05-30 01:02:08.272925 | orchestrator | TASK [Copy ceph infrastructure keys to the configuration repository] *********** 2025-05-30 01:02:08.272935 | orchestrator | Friday 30 May 2025 01:01:09 +0000 (0:00:33.083) 0:00:36.581 ************ 2025-05-30 01:02:08.272957 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.admin.keyring', 'dest': '/opt/configuration/environments/infrastructure/files/ceph/ceph.client.admin.keyring'}) 2025-05-30 01:02:08.272986 | orchestrator | 2025-05-30 01:02:08.273007 | orchestrator | TASK [Copy ceph kolla keys to the configuration repository] ******************** 2025-05-30 01:02:08.273026 | orchestrator | Friday 30 May 2025 01:01:09 +0000 (0:00:00.406) 0:00:36.988 ************ 2025-05-30 01:02:08.273047 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring'}) 2025-05-30 01:02:08.273067 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder.keyring'}) 2025-05-30 01:02:08.273089 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder-backup.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder-backup.keyring'}) 
2025-05-30 01:02:08.273110 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.cinder.keyring'}) 2025-05-30 01:02:08.273131 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.nova.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.nova.keyring'}) 2025-05-30 01:02:08.273160 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.glance.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/glance/ceph.client.glance.keyring'}) 2025-05-30 01:02:08.273175 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.gnocchi.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/gnocchi/ceph.client.gnocchi.keyring'}) 2025-05-30 01:02:08.273188 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.manila.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/manila/ceph.client.manila.keyring'}) 2025-05-30 01:02:08.273201 | orchestrator | 2025-05-30 01:02:08.273214 | orchestrator | TASK [Copy ceph custom keys to the configuration repository] ******************* 2025-05-30 01:02:08.273232 | orchestrator | Friday 30 May 2025 01:01:12 +0000 (0:00:02.296) 0:00:39.285 ************ 2025-05-30 01:02:08.273245 | orchestrator | skipping: [testbed-manager] 2025-05-30 01:02:08.273258 | orchestrator | 2025-05-30 01:02:08.273280 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 01:02:08.273293 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-30 01:02:08.273318 | orchestrator | 2025-05-30 01:02:08.273331 | orchestrator | Friday 30 May 2025 01:01:12 +0000 (0:00:00.033) 0:00:39.318 ************ 2025-05-30 01:02:08.273343 | orchestrator | =============================================================================== 2025-05-30 01:02:08.273357 | orchestrator | Fetch ceph keys from the first monitor node ---------------------------- 33.08s 2025-05-30 01:02:08.273370 | orchestrator | Check ceph keys --------------------------------------------------------- 2.94s 2025-05-30 01:02:08.273380 | orchestrator | Copy ceph kolla keys to the configuration repository -------------------- 2.30s 2025-05-30 01:02:08.273391 | orchestrator | Copy ceph infrastructure keys to the configuration repository ----------- 0.41s 2025-05-30 01:02:08.273402 | orchestrator | Set _fetch_ceph_keys fact ----------------------------------------------- 0.25s 2025-05-30 01:02:08.273413 | orchestrator | Point out that the following task takes some time and does not give any output --- 0.16s 2025-05-30 01:02:08.273424 | orchestrator | Copy ceph custom keys to the configuration repository ------------------- 0.03s 2025-05-30 01:02:08.273434 | orchestrator | 2025-05-30 01:02:08.273445 | orchestrator | 2025-05-30 01:02:08 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:02:11.295884 | orchestrator | 2025-05-30 01:02:11 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:02:11.296140 | orchestrator | 2025-05-30 01:02:11 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:02:11.297842 | orchestrator | 2025-05-30 01:02:11 | INFO  | Task c681daaf-eea7-4477-9a3c-03af4b0d214a is in state STARTED 2025-05-30 01:02:11.298397 | orchestrator | 2025-05-30 01:02:11 | INFO  | Task 
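The play above distributes the fetched keyrings into the configuration repository: one copy for the infrastructure environment and one copy per consuming kolla service overlay (cinder-volume, cinder-backup, nova, glance, gnocchi, manila). A condensed sketch of that mapping; the source directory and variable names are assumptions, while the destination paths are taken from the log.

# Sketch only -- abridged src/dest mapping for the
# "Copy ceph kolla keys to the configuration repository" task.
- name: Copy ceph kolla keys to the configuration repository
  copy:
    src: "{{ ceph_keys_dir | default('/share') }}/{{ item.src }}"   # source dir assumed
    dest: "{{ item.dest }}"
  loop:
    - src: ceph.client.cinder.keyring
      dest: /opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring
    - src: ceph.client.nova.keyring
      dest: /opt/configuration/environments/kolla/files/overlays/nova/ceph.client.nova.keyring
    - src: ceph.client.glance.keyring
      dest: /opt/configuration/environments/kolla/files/overlays/glance/ceph.client.glance.keyring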
9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:02:11.298786 | orchestrator | 2025-05-30 01:02:11 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:02:11.299386 | orchestrator | 2025-05-30 01:02:11 | INFO  | Task 5d4dae60-7491-457a-a680-1ca83bc00de0 is in state STARTED 2025-05-30 01:02:11.299408 | orchestrator | 2025-05-30 01:02:11 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:02:14.346146 | orchestrator | 2025-05-30 01:02:14 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:02:14.348596 | orchestrator | 2025-05-30 01:02:14 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:02:14.349750 | orchestrator | 2025-05-30 01:02:14 | INFO  | Task c681daaf-eea7-4477-9a3c-03af4b0d214a is in state STARTED 2025-05-30 01:02:14.350380 | orchestrator | 2025-05-30 01:02:14 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:02:14.352455 | orchestrator | 2025-05-30 01:02:14 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:02:14.355342 | orchestrator | 2025-05-30 01:02:14 | INFO  | Task 5d4dae60-7491-457a-a680-1ca83bc00de0 is in state STARTED 2025-05-30 01:02:14.355373 | orchestrator | 2025-05-30 01:02:14 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:02:17.395765 | orchestrator | 2025-05-30 01:02:17 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:02:17.395868 | orchestrator | 2025-05-30 01:02:17 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:02:17.396825 | orchestrator | 2025-05-30 01:02:17 | INFO  | Task c681daaf-eea7-4477-9a3c-03af4b0d214a is in state STARTED 2025-05-30 01:02:17.397365 | orchestrator | 2025-05-30 01:02:17 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:02:17.398133 | orchestrator | 2025-05-30 01:02:17 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:02:17.398691 | orchestrator | 2025-05-30 01:02:17 | INFO  | Task 5d4dae60-7491-457a-a680-1ca83bc00de0 is in state STARTED 2025-05-30 01:02:17.398748 | orchestrator | 2025-05-30 01:02:17 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:02:20.433896 | orchestrator | 2025-05-30 01:02:20 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:02:20.436361 | orchestrator | 2025-05-30 01:02:20 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:02:20.438397 | orchestrator | 2025-05-30 01:02:20 | INFO  | Task c681daaf-eea7-4477-9a3c-03af4b0d214a is in state STARTED 2025-05-30 01:02:20.441091 | orchestrator | 2025-05-30 01:02:20 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:02:20.442700 | orchestrator | 2025-05-30 01:02:20 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:02:20.444359 | orchestrator | 2025-05-30 01:02:20 | INFO  | Task 5d4dae60-7491-457a-a680-1ca83bc00de0 is in state STARTED 2025-05-30 01:02:20.444392 | orchestrator | 2025-05-30 01:02:20 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:02:23.470226 | orchestrator | 2025-05-30 01:02:23 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:02:23.470451 | orchestrator | 2025-05-30 01:02:23 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:02:23.470499 | orchestrator | 2025-05-30 
01:02:23 | INFO  | Task c681daaf-eea7-4477-9a3c-03af4b0d214a is in state STARTED 2025-05-30 01:02:23.470961 | orchestrator | 2025-05-30 01:02:23 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:02:23.471364 | orchestrator | 2025-05-30 01:02:23 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:02:23.471909 | orchestrator | 2025-05-30 01:02:23 | INFO  | Task 5d4dae60-7491-457a-a680-1ca83bc00de0 is in state STARTED 2025-05-30 01:02:23.471935 | orchestrator | 2025-05-30 01:02:23 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:02:26.500028 | orchestrator | 2025-05-30 01:02:26 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:02:26.500138 | orchestrator | 2025-05-30 01:02:26 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:02:26.500476 | orchestrator | 2025-05-30 01:02:26 | INFO  | Task c681daaf-eea7-4477-9a3c-03af4b0d214a is in state STARTED 2025-05-30 01:02:26.500901 | orchestrator | 2025-05-30 01:02:26 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:02:26.501398 | orchestrator | 2025-05-30 01:02:26 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:02:26.502743 | orchestrator | 2025-05-30 01:02:26 | INFO  | Task 5d4dae60-7491-457a-a680-1ca83bc00de0 is in state STARTED 2025-05-30 01:02:26.502776 | orchestrator | 2025-05-30 01:02:26 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:02:29.537924 | orchestrator | 2025-05-30 01:02:29 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:02:29.538064 | orchestrator | 2025-05-30 01:02:29 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:02:29.541336 | orchestrator | 2025-05-30 01:02:29 | INFO  | Task c681daaf-eea7-4477-9a3c-03af4b0d214a is in state STARTED 2025-05-30 01:02:29.541365 | orchestrator | 2025-05-30 01:02:29 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:02:29.541376 | orchestrator | 2025-05-30 01:02:29 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:02:29.541388 | orchestrator | 2025-05-30 01:02:29 | INFO  | Task 5d4dae60-7491-457a-a680-1ca83bc00de0 is in state STARTED 2025-05-30 01:02:29.541425 | orchestrator | 2025-05-30 01:02:29 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:02:32.565297 | orchestrator | 2025-05-30 01:02:32 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:02:32.566702 | orchestrator | 2025-05-30 01:02:32 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:02:32.566787 | orchestrator | 2025-05-30 01:02:32 | INFO  | Task c681daaf-eea7-4477-9a3c-03af4b0d214a is in state STARTED 2025-05-30 01:02:32.566901 | orchestrator | 2025-05-30 01:02:32 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:02:32.566932 | orchestrator | 2025-05-30 01:02:32 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:02:32.570800 | orchestrator | 2025-05-30 01:02:32 | INFO  | Task 5d4dae60-7491-457a-a680-1ca83bc00de0 is in state STARTED 2025-05-30 01:02:32.570839 | orchestrator | 2025-05-30 01:02:32 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:02:35.597302 | orchestrator | 2025-05-30 01:02:35 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:02:35.597429 | 
orchestrator | 2025-05-30 01:02:35 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:02:35.597445 | orchestrator | 2025-05-30 01:02:35 | INFO  | Task c681daaf-eea7-4477-9a3c-03af4b0d214a is in state STARTED 2025-05-30 01:02:35.598237 | orchestrator | 2025-05-30 01:02:35 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:02:35.598278 | orchestrator | 2025-05-30 01:02:35 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:02:35.598903 | orchestrator | 2025-05-30 01:02:35 | INFO  | Task 5d4dae60-7491-457a-a680-1ca83bc00de0 is in state STARTED 2025-05-30 01:02:35.598925 | orchestrator | 2025-05-30 01:02:35 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:02:38.633241 | orchestrator | 2025-05-30 01:02:38 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:02:38.633353 | orchestrator | 2025-05-30 01:02:38 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:02:38.633369 | orchestrator | 2025-05-30 01:02:38 | INFO  | Task c681daaf-eea7-4477-9a3c-03af4b0d214a is in state STARTED 2025-05-30 01:02:38.633754 | orchestrator | 2025-05-30 01:02:38 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:02:38.634374 | orchestrator | 2025-05-30 01:02:38 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:02:38.634987 | orchestrator | 2025-05-30 01:02:38 | INFO  | Task 5d4dae60-7491-457a-a680-1ca83bc00de0 is in state STARTED 2025-05-30 01:02:38.635014 | orchestrator | 2025-05-30 01:02:38 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:02:41.667603 | orchestrator | 2025-05-30 01:02:41 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:02:41.667857 | orchestrator | 2025-05-30 01:02:41 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:02:41.668316 | orchestrator | 2025-05-30 01:02:41 | INFO  | Task c681daaf-eea7-4477-9a3c-03af4b0d214a is in state STARTED 2025-05-30 01:02:41.668995 | orchestrator | 2025-05-30 01:02:41 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:02:41.669720 | orchestrator | 2025-05-30 01:02:41 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:02:41.670179 | orchestrator | 2025-05-30 01:02:41 | INFO  | Task 5d4dae60-7491-457a-a680-1ca83bc00de0 is in state SUCCESS 2025-05-30 01:02:41.670230 | orchestrator | 2025-05-30 01:02:41 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:02:41.670454 | orchestrator | 2025-05-30 01:02:41.670476 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-05-30 01:02:41.670488 | orchestrator | 2025-05-30 01:02:41.670500 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-05-30 01:02:41.670511 | orchestrator | Friday 30 May 2025 01:01:15 +0000 (0:00:00.159) 0:00:00.159 ************ 2025-05-30 01:02:41.670523 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-05-30 01:02:41.670535 | orchestrator | 2025-05-30 01:02:41.670546 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-05-30 01:02:41.670557 | orchestrator | Friday 30 May 2025 01:01:15 +0000 (0:00:00.228) 0:00:00.388 ************ 2025-05-30 
01:02:41.670592 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-05-30 01:02:41.670604 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-05-30 01:02:41.670615 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-05-30 01:02:41.670626 | orchestrator | 2025-05-30 01:02:41.670638 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-05-30 01:02:41.670648 | orchestrator | Friday 30 May 2025 01:01:16 +0000 (0:00:01.157) 0:00:01.545 ************ 2025-05-30 01:02:41.670660 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-05-30 01:02:41.670671 | orchestrator | 2025-05-30 01:02:41.670681 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-05-30 01:02:41.670692 | orchestrator | Friday 30 May 2025 01:01:17 +0000 (0:00:01.092) 0:00:02.638 ************ 2025-05-30 01:02:41.670703 | orchestrator | changed: [testbed-manager] 2025-05-30 01:02:41.670714 | orchestrator | 2025-05-30 01:02:41.670725 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-05-30 01:02:41.670736 | orchestrator | Friday 30 May 2025 01:01:18 +0000 (0:00:00.898) 0:00:03.536 ************ 2025-05-30 01:02:41.670747 | orchestrator | changed: [testbed-manager] 2025-05-30 01:02:41.670758 | orchestrator | 2025-05-30 01:02:41.670769 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-05-30 01:02:41.670780 | orchestrator | Friday 30 May 2025 01:01:19 +0000 (0:00:00.987) 0:00:04.524 ************ 2025-05-30 01:02:41.670791 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
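[editor's note] The cephclient role prepares /opt/cephclient (configuration and data directories, ceph.conf, keyring, docker-compose.yml) and then starts the containerized client, retrying until the service comes up; the retry above succeeds on a later attempt, as the following output shows. A minimal sketch of that retry/until pattern, assuming a compose file in /opt/cephclient (illustrative only, not the role's actual implementation):

  - name: Manage cephclient service
    ansible.builtin.command:
      cmd: docker compose up -d
      chdir: /opt/cephclient        # compose file location is an assumption
    register: compose_result
    retries: 10                     # matches the "10 retries left" message above
    delay: 10
    until: compose_result.rc == 0
    changed_when: false             # idempotency handling is simplified in this sketch

The retries/until combination is what produces the FAILED - RETRYING lines: a failed attempt is logged, Ansible waits for the delay, and the task only fails for good once the retries are exhausted.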
2025-05-30 01:02:41.670802 | orchestrator | ok: [testbed-manager] 2025-05-30 01:02:41.670813 | orchestrator | 2025-05-30 01:02:41.670824 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-05-30 01:02:41.670835 | orchestrator | Friday 30 May 2025 01:01:58 +0000 (0:00:38.649) 0:00:43.173 ************ 2025-05-30 01:02:41.670846 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-05-30 01:02:41.670857 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-05-30 01:02:41.670868 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-05-30 01:02:41.670879 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-05-30 01:02:41.670890 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-05-30 01:02:41.670901 | orchestrator | 2025-05-30 01:02:41.670911 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-05-30 01:02:41.670938 | orchestrator | Friday 30 May 2025 01:02:01 +0000 (0:00:03.491) 0:00:46.665 ************ 2025-05-30 01:02:41.670949 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-05-30 01:02:41.670960 | orchestrator | 2025-05-30 01:02:41.670971 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-05-30 01:02:41.670982 | orchestrator | Friday 30 May 2025 01:02:02 +0000 (0:00:00.339) 0:00:47.004 ************ 2025-05-30 01:02:41.670993 | orchestrator | skipping: [testbed-manager] 2025-05-30 01:02:41.671004 | orchestrator | 2025-05-30 01:02:41.671015 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-05-30 01:02:41.671036 | orchestrator | Friday 30 May 2025 01:02:02 +0000 (0:00:00.089) 0:00:47.094 ************ 2025-05-30 01:02:41.671047 | orchestrator | skipping: [testbed-manager] 2025-05-30 01:02:41.671058 | orchestrator | 2025-05-30 01:02:41.671069 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-05-30 01:02:41.671080 | orchestrator | Friday 30 May 2025 01:02:02 +0000 (0:00:00.233) 0:00:47.327 ************ 2025-05-30 01:02:41.671090 | orchestrator | changed: [testbed-manager] 2025-05-30 01:02:41.671101 | orchestrator | 2025-05-30 01:02:41.671113 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-05-30 01:02:41.671124 | orchestrator | Friday 30 May 2025 01:02:03 +0000 (0:00:01.120) 0:00:48.448 ************ 2025-05-30 01:02:41.671134 | orchestrator | changed: [testbed-manager] 2025-05-30 01:02:41.671145 | orchestrator | 2025-05-30 01:02:41.671156 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-05-30 01:02:41.671166 | orchestrator | Friday 30 May 2025 01:02:04 +0000 (0:00:00.728) 0:00:49.177 ************ 2025-05-30 01:02:41.671177 | orchestrator | changed: [testbed-manager] 2025-05-30 01:02:41.671188 | orchestrator | 2025-05-30 01:02:41.671198 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-05-30 01:02:41.671209 | orchestrator | Friday 30 May 2025 01:02:04 +0000 (0:00:00.450) 0:00:49.628 ************ 2025-05-30 01:02:41.671220 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-05-30 01:02:41.671231 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-05-30 01:02:41.671241 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-05-30 01:02:41.671252 | orchestrator | ok: 
[testbed-manager] => (item=rbd) 2025-05-30 01:02:41.671263 | orchestrator | 2025-05-30 01:02:41.671273 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 01:02:41.671284 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-30 01:02:41.671296 | orchestrator | 2025-05-30 01:02:41.671318 | orchestrator | Friday 30 May 2025 01:02:05 +0000 (0:00:01.129) 0:00:50.757 ************ 2025-05-30 01:02:41.671329 | orchestrator | =============================================================================== 2025-05-30 01:02:41.671340 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 38.65s 2025-05-30 01:02:41.671351 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.49s 2025-05-30 01:02:41.671361 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.16s 2025-05-30 01:02:41.671372 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.13s 2025-05-30 01:02:41.671382 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.12s 2025-05-30 01:02:41.671393 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.09s 2025-05-30 01:02:41.671404 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.99s 2025-05-30 01:02:41.671414 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.90s 2025-05-30 01:02:41.671425 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.73s 2025-05-30 01:02:41.671435 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.45s 2025-05-30 01:02:41.671446 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.34s 2025-05-30 01:02:41.671457 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.23s 2025-05-30 01:02:41.671467 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.23s 2025-05-30 01:02:41.671478 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.09s 2025-05-30 01:02:41.671489 | orchestrator | 2025-05-30 01:02:44.698470 | orchestrator | 2025-05-30 01:02:44 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:02:44.704295 | orchestrator | 2025-05-30 01:02:44 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:02:44.707362 | orchestrator | 2025-05-30 01:02:44 | INFO  | Task c681daaf-eea7-4477-9a3c-03af4b0d214a is in state STARTED 2025-05-30 01:02:44.708877 | orchestrator | 2025-05-30 01:02:44 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:02:44.709998 | orchestrator | 2025-05-30 01:02:44 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:02:44.712414 | orchestrator | 2025-05-30 01:02:44 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:02:47.743240 | orchestrator | 2025-05-30 01:02:47 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:02:47.743706 | orchestrator | 2025-05-30 01:02:47 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:02:47.747826 | orchestrator | 2025-05-30 01:02:47 | INFO  | Task 
c681daaf-eea7-4477-9a3c-03af4b0d214a is in state STARTED 2025-05-30 01:02:47.748484 | orchestrator | 2025-05-30 01:02:47 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:02:47.749348 | orchestrator | 2025-05-30 01:02:47 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:02:47.749369 | orchestrator | 2025-05-30 01:02:47 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:02:50.789126 | orchestrator | 2025-05-30 01:02:50 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:02:50.789238 | orchestrator | 2025-05-30 01:02:50 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:02:50.791535 | orchestrator | 2025-05-30 01:02:50 | INFO  | Task c681daaf-eea7-4477-9a3c-03af4b0d214a is in state STARTED 2025-05-30 01:02:50.791622 | orchestrator | 2025-05-30 01:02:50 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:02:50.793229 | orchestrator | 2025-05-30 01:02:50 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:02:50.793254 | orchestrator | 2025-05-30 01:02:50 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:02:53.830392 | orchestrator | 2025-05-30 01:02:53 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:02:53.831115 | orchestrator | 2025-05-30 01:02:53 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:02:53.831983 | orchestrator | 2025-05-30 01:02:53 | INFO  | Task c681daaf-eea7-4477-9a3c-03af4b0d214a is in state STARTED 2025-05-30 01:02:53.833104 | orchestrator | 2025-05-30 01:02:53 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:02:53.834346 | orchestrator | 2025-05-30 01:02:53 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:02:53.834428 | orchestrator | 2025-05-30 01:02:53 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:02:56.869392 | orchestrator | 2025-05-30 01:02:56 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:02:56.869647 | orchestrator | 2025-05-30 01:02:56 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:02:56.869678 | orchestrator | 2025-05-30 01:02:56 | INFO  | Task c681daaf-eea7-4477-9a3c-03af4b0d214a is in state STARTED 2025-05-30 01:02:56.871316 | orchestrator | 2025-05-30 01:02:56 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:02:56.872348 | orchestrator | 2025-05-30 01:02:56 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:02:56.872449 | orchestrator | 2025-05-30 01:02:56 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:02:59.917567 | orchestrator | 2025-05-30 01:02:59 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:02:59.917775 | orchestrator | 2025-05-30 01:02:59 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:02:59.917811 | orchestrator | 2025-05-30 01:02:59 | INFO  | Task c681daaf-eea7-4477-9a3c-03af4b0d214a is in state STARTED 2025-05-30 01:02:59.918168 | orchestrator | 2025-05-30 01:02:59 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:02:59.919050 | orchestrator | 2025-05-30 01:02:59 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:02:59.919306 | orchestrator | 2025-05-30 
01:02:59 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:03:02.950445 | orchestrator | 2025-05-30 01:03:02 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:03:02.950612 | orchestrator | 2025-05-30 01:03:02 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:03:02.950845 | orchestrator | 2025-05-30 01:03:02 | INFO  | Task c681daaf-eea7-4477-9a3c-03af4b0d214a is in state STARTED 2025-05-30 01:03:02.951503 | orchestrator | 2025-05-30 01:03:02 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:03:02.952137 | orchestrator | 2025-05-30 01:03:02 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:03:02.952161 | orchestrator | 2025-05-30 01:03:02 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:03:05.974109 | orchestrator | 2025-05-30 01:03:05 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:03:05.976413 | orchestrator | 2025-05-30 01:03:05 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:03:05.976997 | orchestrator | 2025-05-30 01:03:05 | INFO  | Task c681daaf-eea7-4477-9a3c-03af4b0d214a is in state STARTED 2025-05-30 01:03:05.977201 | orchestrator | 2025-05-30 01:03:05 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:03:05.977720 | orchestrator | 2025-05-30 01:03:05 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:03:05.977751 | orchestrator | 2025-05-30 01:03:05 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:03:09.009139 | orchestrator | 2025-05-30 01:03:09 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:03:09.009917 | orchestrator | 2025-05-30 01:03:09 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:03:09.010893 | orchestrator | 2025-05-30 01:03:09 | INFO  | Task c681daaf-eea7-4477-9a3c-03af4b0d214a is in state STARTED 2025-05-30 01:03:09.012048 | orchestrator | 2025-05-30 01:03:09 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:03:09.012891 | orchestrator | 2025-05-30 01:03:09 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:03:09.012913 | orchestrator | 2025-05-30 01:03:09 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:03:12.054805 | orchestrator | 2025-05-30 01:03:12 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:03:12.054913 | orchestrator | 2025-05-30 01:03:12 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:03:12.054928 | orchestrator | 2025-05-30 01:03:12 | INFO  | Task c681daaf-eea7-4477-9a3c-03af4b0d214a is in state STARTED 2025-05-30 01:03:12.054940 | orchestrator | 2025-05-30 01:03:12 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:03:12.055235 | orchestrator | 2025-05-30 01:03:12 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:03:12.055260 | orchestrator | 2025-05-30 01:03:12 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:03:15.090875 | orchestrator | 2025-05-30 01:03:15 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:03:15.092763 | orchestrator | 2025-05-30 01:03:15 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:03:15.095126 | orchestrator | 2025-05-30 
01:03:15 | INFO  | Task c681daaf-eea7-4477-9a3c-03af4b0d214a is in state STARTED 2025-05-30 01:03:15.096463 | orchestrator | 2025-05-30 01:03:15 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:03:15.098283 | orchestrator | 2025-05-30 01:03:15 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:03:15.098430 | orchestrator | 2025-05-30 01:03:15 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:03:18.130162 | orchestrator | 2025-05-30 01:03:18 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:03:18.130282 | orchestrator | 2025-05-30 01:03:18 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:03:18.130920 | orchestrator | 2025-05-30 01:03:18 | INFO  | Task c681daaf-eea7-4477-9a3c-03af4b0d214a is in state STARTED 2025-05-30 01:03:18.131365 | orchestrator | 2025-05-30 01:03:18 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:03:18.131849 | orchestrator | 2025-05-30 01:03:18 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:03:18.133808 | orchestrator | 2025-05-30 01:03:18 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:03:21.174191 | orchestrator | 2025-05-30 01:03:21 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:03:21.174351 | orchestrator | 2025-05-30 01:03:21 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:03:21.174829 | orchestrator | 2025-05-30 01:03:21 | INFO  | Task c681daaf-eea7-4477-9a3c-03af4b0d214a is in state STARTED 2025-05-30 01:03:21.175439 | orchestrator | 2025-05-30 01:03:21 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:03:21.176397 | orchestrator | 2025-05-30 01:03:21 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:03:21.176469 | orchestrator | 2025-05-30 01:03:21 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:03:24.210464 | orchestrator | 2025-05-30 01:03:24 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:03:24.210668 | orchestrator | 2025-05-30 01:03:24 | INFO  | Task ecbbbcb1-524d-4420-a775-955811dfb74c is in state STARTED 2025-05-30 01:03:24.210751 | orchestrator | 2025-05-30 01:03:24 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:03:24.211430 | orchestrator | 2025-05-30 01:03:24 | INFO  | Task c681daaf-eea7-4477-9a3c-03af4b0d214a is in state SUCCESS 2025-05-30 01:03:24.213374 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-30 01:03:24.213406 | orchestrator | 2025-05-30 01:03:24.213417 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-05-30 01:03:24.213451 | orchestrator | 2025-05-30 01:03:24.213462 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-05-30 01:03:24.213548 | orchestrator | Friday 30 May 2025 01:02:08 +0000 (0:00:00.323) 0:00:00.323 ************ 2025-05-30 01:03:24.213563 | orchestrator | changed: [testbed-manager] 2025-05-30 01:03:24.213596 | orchestrator | 2025-05-30 01:03:24.213606 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-05-30 01:03:24.213616 | orchestrator | Friday 30 May 2025 01:02:10 +0000 (0:00:01.643) 0:00:01.966 ************ 2025-05-30 01:03:24.213636 | 
orchestrator | changed: [testbed-manager] 2025-05-30 01:03:24.213646 | orchestrator | 2025-05-30 01:03:24.213656 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-05-30 01:03:24.213688 | orchestrator | Friday 30 May 2025 01:02:11 +0000 (0:00:00.927) 0:00:02.894 ************ 2025-05-30 01:03:24.213698 | orchestrator | changed: [testbed-manager] 2025-05-30 01:03:24.213707 | orchestrator | 2025-05-30 01:03:24.213717 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-05-30 01:03:24.213726 | orchestrator | Friday 30 May 2025 01:02:12 +0000 (0:00:00.747) 0:00:03.642 ************ 2025-05-30 01:03:24.213736 | orchestrator | changed: [testbed-manager] 2025-05-30 01:03:24.213745 | orchestrator | 2025-05-30 01:03:24.213755 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-05-30 01:03:24.213765 | orchestrator | Friday 30 May 2025 01:02:13 +0000 (0:00:00.987) 0:00:04.629 ************ 2025-05-30 01:03:24.213774 | orchestrator | changed: [testbed-manager] 2025-05-30 01:03:24.213784 | orchestrator | 2025-05-30 01:03:24.213793 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-05-30 01:03:24.213803 | orchestrator | Friday 30 May 2025 01:02:13 +0000 (0:00:00.854) 0:00:05.483 ************ 2025-05-30 01:03:24.213812 | orchestrator | changed: [testbed-manager] 2025-05-30 01:03:24.213822 | orchestrator | 2025-05-30 01:03:24.213831 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-05-30 01:03:24.213841 | orchestrator | Friday 30 May 2025 01:02:14 +0000 (0:00:00.944) 0:00:06.428 ************ 2025-05-30 01:03:24.213851 | orchestrator | changed: [testbed-manager] 2025-05-30 01:03:24.213860 | orchestrator | 2025-05-30 01:03:24.213870 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-05-30 01:03:24.213879 | orchestrator | Friday 30 May 2025 01:02:15 +0000 (0:00:01.075) 0:00:07.504 ************ 2025-05-30 01:03:24.213889 | orchestrator | changed: [testbed-manager] 2025-05-30 01:03:24.213898 | orchestrator | 2025-05-30 01:03:24.213908 | orchestrator | TASK [Create admin user] ******************************************************* 2025-05-30 01:03:24.213917 | orchestrator | Friday 30 May 2025 01:02:17 +0000 (0:00:01.116) 0:00:08.620 ************ 2025-05-30 01:03:24.213927 | orchestrator | changed: [testbed-manager] 2025-05-30 01:03:24.213936 | orchestrator | 2025-05-30 01:03:24.213946 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-05-30 01:03:24.213955 | orchestrator | Friday 30 May 2025 01:02:34 +0000 (0:00:17.831) 0:00:26.451 ************ 2025-05-30 01:03:24.213965 | orchestrator | skipping: [testbed-manager] 2025-05-30 01:03:24.213976 | orchestrator | 2025-05-30 01:03:24.213988 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-30 01:03:24.213999 | orchestrator | 2025-05-30 01:03:24.214010 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-30 01:03:24.214072 | orchestrator | Friday 30 May 2025 01:02:35 +0000 (0:00:00.594) 0:00:27.046 ************ 2025-05-30 01:03:24.214084 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:03:24.214095 | orchestrator | 2025-05-30 01:03:24.214106 | orchestrator | PLAY [Restart ceph manager services] 
******************************************* 2025-05-30 01:03:24.214117 | orchestrator | 2025-05-30 01:03:24.214128 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-30 01:03:24.214139 | orchestrator | Friday 30 May 2025 01:02:37 +0000 (0:00:01.975) 0:00:29.021 ************ 2025-05-30 01:03:24.214150 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:03:24.214162 | orchestrator | 2025-05-30 01:03:24.214172 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-30 01:03:24.214183 | orchestrator | 2025-05-30 01:03:24.214195 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-30 01:03:24.214206 | orchestrator | Friday 30 May 2025 01:02:39 +0000 (0:00:01.736) 0:00:30.758 ************ 2025-05-30 01:03:24.214269 | orchestrator | changed: [testbed-node-2] 2025-05-30 01:03:24.214280 | orchestrator | 2025-05-30 01:03:24.214291 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 01:03:24.214304 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-30 01:03:24.214317 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 01:03:24.214330 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 01:03:24.214341 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 01:03:24.214352 | orchestrator | 2025-05-30 01:03:24.214361 | orchestrator | 2025-05-30 01:03:24.214371 | orchestrator | 2025-05-30 01:03:24.214381 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-30 01:03:24.214398 | orchestrator | Friday 30 May 2025 01:02:40 +0000 (0:00:01.484) 0:00:32.242 ************ 2025-05-30 01:03:24.214408 | orchestrator | =============================================================================== 2025-05-30 01:03:24.214418 | orchestrator | Create admin user ------------------------------------------------------ 17.83s 2025-05-30 01:03:24.214440 | orchestrator | Restart ceph manager service -------------------------------------------- 5.20s 2025-05-30 01:03:24.214450 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.64s 2025-05-30 01:03:24.214460 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.12s 2025-05-30 01:03:24.214470 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.08s 2025-05-30 01:03:24.214479 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 0.99s 2025-05-30 01:03:24.214515 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.94s 2025-05-30 01:03:24.214526 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.93s 2025-05-30 01:03:24.214535 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.85s 2025-05-30 01:03:24.214545 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.75s 2025-05-30 01:03:24.214555 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.59s 2025-05-30 01:03:24.214564 | orchestrator | 2025-05-30 01:03:24.214574 | 
orchestrator | 2025-05-30 01:03:24.214584 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-30 01:03:24.214593 | orchestrator | 2025-05-30 01:03:24.214603 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-30 01:03:24.214612 | orchestrator | Friday 30 May 2025 01:01:06 +0000 (0:00:00.384) 0:00:00.384 ************ 2025-05-30 01:03:24.214622 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:03:24.214680 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:03:24.214691 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:03:24.214700 | orchestrator | 2025-05-30 01:03:24.214710 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-30 01:03:24.214719 | orchestrator | Friday 30 May 2025 01:01:07 +0000 (0:00:00.436) 0:00:00.821 ************ 2025-05-30 01:03:24.214729 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-05-30 01:03:24.214739 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-05-30 01:03:24.214749 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-05-30 01:03:24.214758 | orchestrator | 2025-05-30 01:03:24.214768 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-05-30 01:03:24.214778 | orchestrator | 2025-05-30 01:03:24.214787 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-30 01:03:24.214797 | orchestrator | Friday 30 May 2025 01:01:07 +0000 (0:00:00.314) 0:00:01.136 ************ 2025-05-30 01:03:24.214814 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 01:03:24.214837 | orchestrator | 2025-05-30 01:03:24.214847 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-05-30 01:03:24.214857 | orchestrator | Friday 30 May 2025 01:01:07 +0000 (0:00:00.560) 0:00:01.696 ************ 2025-05-30 01:03:24.214866 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-05-30 01:03:24.214876 | orchestrator | 2025-05-30 01:03:24.214886 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-05-30 01:03:24.214895 | orchestrator | Friday 30 May 2025 01:01:11 +0000 (0:00:03.219) 0:00:04.916 ************ 2025-05-30 01:03:24.214905 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-05-30 01:03:24.214914 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-05-30 01:03:24.214929 | orchestrator | 2025-05-30 01:03:24.214945 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-05-30 01:03:24.214961 | orchestrator | Friday 30 May 2025 01:01:17 +0000 (0:00:06.368) 0:00:11.285 ************ 2025-05-30 01:03:24.214976 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating projects (5 retries left). 
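[editor's note] The service-ks-register tasks register Barbican in Keystone: a key-manager service, internal and public endpoints, the service project, a service user, the Barbican roles, and the role grant, with retries when Keystone is briefly unavailable (as in the "Creating projects" retry above). A minimal sketch of the same registration done with the OpenStack CLI from Ansible, assuming admin credentials are available in the environment; the deployment itself uses kolla-ansible's service-ks-register role and its modules, not these commands, the region name below is an assumption, and the URLs are the ones shown in the log:

  - name: barbican | Creating services
    ansible.builtin.command: >
      openstack service create --name barbican key-manager

  # Region name "RegionOne" is an assumption.
  - name: barbican | Creating endpoints
    ansible.builtin.command: >
      openstack endpoint create --region RegionOne
      key-manager {{ item.interface }} {{ item.url }}
    loop:
      - interface: internal
        url: https://api-int.testbed.osism.xyz:9311
      - interface: public
        url: https://api.testbed.osism.xyz:9311

Unlike the module-based role, these CLI calls are not idempotent; rerunning them would create duplicate services and endpoints, so they are only meant to show what is being registered.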
2025-05-30 01:03:24.214992 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-30 01:03:24.215008 | orchestrator | 2025-05-30 01:03:24.215023 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-05-30 01:03:24.215033 | orchestrator | Friday 30 May 2025 01:01:33 +0000 (0:00:16.387) 0:00:27.673 ************ 2025-05-30 01:03:24.215043 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-30 01:03:24.215052 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-05-30 01:03:24.215062 | orchestrator | 2025-05-30 01:03:24.215072 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-05-30 01:03:24.215081 | orchestrator | Friday 30 May 2025 01:01:37 +0000 (0:00:03.722) 0:00:31.395 ************ 2025-05-30 01:03:24.215091 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-30 01:03:24.215101 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-05-30 01:03:24.215110 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-05-30 01:03:24.215120 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-05-30 01:03:24.215130 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-05-30 01:03:24.215139 | orchestrator | 2025-05-30 01:03:24.215149 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-05-30 01:03:24.215158 | orchestrator | Friday 30 May 2025 01:01:53 +0000 (0:00:15.393) 0:00:46.789 ************ 2025-05-30 01:03:24.215168 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-05-30 01:03:24.215178 | orchestrator | 2025-05-30 01:03:24.215187 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-05-30 01:03:24.215204 | orchestrator | Friday 30 May 2025 01:01:57 +0000 (0:00:04.017) 0:00:50.806 ************ 2025-05-30 01:03:24.215279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-30 01:03:24.215323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-30 01:03:24.215336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-30 01:03:24.215348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-30 01:03:24.215372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-30 01:03:24.215383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-30 01:03:24.215401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:03:24.215413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:03:24.215423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:03:24.215433 | orchestrator | 2025-05-30 01:03:24.215461 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-05-30 01:03:24.215472 | orchestrator | Friday 30 May 2025 01:01:59 +0000 (0:00:02.165) 0:00:52.972 ************ 2025-05-30 01:03:24.215482 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-05-30 01:03:24.215522 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-05-30 01:03:24.215539 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-05-30 01:03:24.215556 | orchestrator | 2025-05-30 01:03:24.215572 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-05-30 01:03:24.215589 | orchestrator | Friday 30 May 2025 01:02:02 +0000 (0:00:02.975) 0:00:55.948 ************ 2025-05-30 01:03:24.215601 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:03:24.215610 | orchestrator | 2025-05-30 01:03:24.215620 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-05-30 01:03:24.215629 | orchestrator | Friday 30 May 2025 01:02:02 +0000 (0:00:00.176) 0:00:56.124 ************ 2025-05-30 01:03:24.215638 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:03:24.215648 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:03:24.215658 | orchestrator | skipping: 
[testbed-node-2] 2025-05-30 01:03:24.215667 | orchestrator | 2025-05-30 01:03:24.215677 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-30 01:03:24.215686 | orchestrator | Friday 30 May 2025 01:02:03 +0000 (0:00:00.662) 0:00:56.787 ************ 2025-05-30 01:03:24.215696 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 01:03:24.215706 | orchestrator | 2025-05-30 01:03:24.215716 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-05-30 01:03:24.215731 | orchestrator | Friday 30 May 2025 01:02:05 +0000 (0:00:02.031) 0:00:58.819 ************ 2025-05-30 01:03:24.215758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-30 01:03:24.215770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-30 01:03:24.215781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-30 01:03:24.215792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-30 01:03:24.215813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-30 01:03:24.215831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-30 01:03:24.215842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:03:24.215852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:03:24.215862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:03:24.215872 | orchestrator | 2025-05-30 01:03:24.215882 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-05-30 01:03:24.215892 | orchestrator | Friday 30 May 2025 01:02:09 +0000 (0:00:04.614) 0:01:03.433 ************ 2025-05-30 01:03:24.215902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-30 01:03:24.215953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-30 01:03:24.215966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 
5672'], 'timeout': '30'}}})  2025-05-30 01:03:24.215977 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:03:24.215987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-30 01:03:24.215998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-30 01:03:24.216008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-30 01:03:24.216024 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:03:24.216044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-30 01:03:24.216056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-30 01:03:24.216066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-30 01:03:24.216076 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:03:24.216086 | orchestrator | 2025-05-30 01:03:24.216096 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-05-30 01:03:24.216105 | orchestrator | Friday 30 May 2025 01:02:10 +0000 (0:00:00.610) 0:01:04.043 ************ 2025-05-30 01:03:24.216115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-30 01:03:24.216126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 
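Both backend TLS copy tasks in this play (internal TLS certificate and key) are skipped on every node, which lines up with the logged service definitions: each barbican_api HAProxy entry carries 'tls_backend': 'no'. A minimal Python sketch of such a skip check, built only from fields visible in this log, might look as follows; the function name and decision rule are illustrative assumptions, not the kolla-ansible implementation.

# Hypothetical skip check for the two backend TLS copy tasks above.
# It only inspects fields that appear verbatim in this log.
def needs_backend_tls_copy(item: dict) -> bool:
    haproxy = item["value"].get("haproxy", {})
    # Copy backend TLS material only if some frontend expects a TLS backend.
    return any(entry.get("tls_backend") == "yes" for entry in haproxy.values())

barbican_api = {
    "key": "barbican-api",
    "value": {
        "haproxy": {
            "barbican_api": {
                "enabled": "yes", "mode": "http", "external": False,
                "port": "9311", "listen_port": "9311", "tls_backend": "no",
            },
            "barbican_api_external": {
                "enabled": "yes", "mode": "http", "external": True,
                "external_fqdn": "api.testbed.osism.xyz",
                "port": "9311", "listen_port": "9311", "tls_backend": "no",
            },
        },
    },
}

print(needs_backend_tls_copy(barbican_api))  # False, so the task is skipped, as logged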
 2025-05-30 01:03:24.216151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-30 01:03:24.216162 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:03:24.216172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-30 01:03:24.216183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-30 01:03:24.216193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-30 01:03:24.216203 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:03:24.216213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-30 01:03:24.216243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-30 01:03:24.216254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-30 01:03:24.216264 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:03:24.216274 | orchestrator | 2025-05-30 01:03:24.216284 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-05-30 01:03:24.216294 | orchestrator | Friday 30 May 2025 01:02:11 +0000 (0:00:01.569) 0:01:05.613 ************ 2025-05-30 01:03:24.216304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-30 01:03:24.216314 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-30 01:03:24.216331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-30 01:03:24.216364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-30 01:03:24.216376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-30 01:03:24.216386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:03:24.216397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-30 01:03:24.216413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:03:24.216423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:03:24.216433 | orchestrator | 2025-05-30 01:03:24.216443 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-05-30 01:03:24.216457 | orchestrator | Friday 30 May 2025 01:02:16 +0000 (0:00:04.990) 0:01:10.603 ************ 2025-05-30 01:03:24.216467 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:03:24.216477 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:03:24.216531 | orchestrator | changed: [testbed-node-2] 2025-05-30 01:03:24.216543 | orchestrator | 2025-05-30 01:03:24.216553 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-05-30 01:03:24.216569 | orchestrator | Friday 30 May 2025 01:02:20 +0000 (0:00:03.328) 0:01:13.932 ************ 2025-05-30 01:03:24.216580 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-30 01:03:24.216589 | orchestrator | 2025-05-30 01:03:24.216599 | orchestrator | TASK 
[barbican : Copying over barbican-api-paste.ini] ************************** 2025-05-30 01:03:24.216608 | orchestrator | Friday 30 May 2025 01:02:21 +0000 (0:00:01.192) 0:01:15.124 ************ 2025-05-30 01:03:24.216618 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:03:24.216627 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:03:24.216637 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:03:24.216646 | orchestrator | 2025-05-30 01:03:24.216656 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-05-30 01:03:24.216665 | orchestrator | Friday 30 May 2025 01:02:23 +0000 (0:00:01.859) 0:01:16.983 ************ 2025-05-30 01:03:24.216676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-30 01:03:24.216687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-30 01:03:24.216706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-30 01:03:24.216726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-30 01:03:24.216737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-30 01:03:24.216748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-30 01:03:24.216758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:03:24.216774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:03:24.216784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:03:24.216794 | orchestrator | 2025-05-30 01:03:24.216804 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-05-30 01:03:24.216814 | orchestrator | Friday 30 May 2025 01:02:32 +0000 (0:00:09.358) 0:01:26.341 ************ 2025-05-30 01:03:24.216835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-30 01:03:24.216847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-30 01:03:24.216857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-30 01:03:24.216876 
| orchestrator | skipping: [testbed-node-0] 2025-05-30 01:03:24.216886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-30 01:03:24.216897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-30 01:03:24.216916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-30 01:03:24.216927 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:03:24.216937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  
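Each service entry above also carries a healthcheck block (healthcheck_curl against the API port for barbican-api, healthcheck_port against 5672 for the keystone listener and worker); these helpers appear to be scripts shipped inside the Kolla images, and the 'Check barbican containers' task that follows hands the values to the container engine. The snippet below only illustrates, under assumed field names and a second-to-nanosecond conversion, how such a block could be mapped to Docker-style healthcheck parameters; it is not kolla-ansible code.

# Illustrative conversion of a logged healthcheck block into the
# nanosecond-based fields a Docker healthcheck expects (assumption).
NS_PER_SECOND = 1_000_000_000

def to_docker_healthcheck(hc: dict) -> dict:
    return {
        "test": hc["test"],  # e.g. ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311']
        "interval": int(hc["interval"]) * NS_PER_SECOND,
        "timeout": int(hc["timeout"]) * NS_PER_SECOND,
        "start_period": int(hc["start_period"]) * NS_PER_SECOND,
        "retries": int(hc["retries"]),
    }

logged_healthcheck = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9311"],
    "timeout": "30",
}
print(to_docker_healthcheck(logged_healthcheck))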
2025-05-30 01:03:24.216947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-30 01:03:24.216963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-30 01:03:24.216974 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:03:24.216983 | orchestrator | 2025-05-30 01:03:24.216993 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-05-30 01:03:24.217003 | orchestrator | Friday 30 May 2025 01:02:33 +0000 (0:00:01.044) 0:01:27.386 ************ 2025-05-30 01:03:24.217020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-30 01:03:24.217054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-30 01:03:24.217071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-30 01:03:24.217089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-30 01:03:24.217100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-30 01:03:24.217110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-30 01:03:24.217147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': 
{'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:03:24.217160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:03:24.217170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:03:24.217186 | orchestrator | 2025-05-30 01:03:24.217196 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-30 01:03:24.217206 | orchestrator | Friday 30 May 2025 01:02:37 +0000 (0:00:03.581) 0:01:30.967 ************ 2025-05-30 01:03:24.217216 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:03:24.217226 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:03:24.217235 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:03:24.217245 | orchestrator | 2025-05-30 01:03:24.217254 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-05-30 01:03:24.217264 | orchestrator | Friday 30 May 2025 01:02:37 +0000 (0:00:00.249) 0:01:31.217 ************ 2025-05-30 01:03:24.217274 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:03:24.217283 | orchestrator | 2025-05-30 01:03:24.217293 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-05-30 01:03:24.217303 | orchestrator | Friday 30 May 2025 01:02:39 +0000 (0:00:02.457) 0:01:33.675 ************ 2025-05-30 01:03:24.217312 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:03:24.217322 | orchestrator | 2025-05-30 01:03:24.217331 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-05-30 01:03:24.217341 | orchestrator | Friday 30 May 2025 01:02:42 +0000 (0:00:02.485) 0:01:36.160 ************ 2025-05-30 01:03:24.217351 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:03:24.217360 | orchestrator | 2025-05-30 01:03:24.217370 | orchestrator | TASK [barbican : 
Flush handlers] *********************************************** 2025-05-30 01:03:24.217379 | orchestrator | Friday 30 May 2025 01:02:52 +0000 (0:00:10.396) 0:01:46.557 ************ 2025-05-30 01:03:24.217389 | orchestrator | 2025-05-30 01:03:24.217399 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-30 01:03:24.217408 | orchestrator | Friday 30 May 2025 01:02:52 +0000 (0:00:00.055) 0:01:46.613 ************ 2025-05-30 01:03:24.217418 | orchestrator | 2025-05-30 01:03:24.217428 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-30 01:03:24.217437 | orchestrator | Friday 30 May 2025 01:02:53 +0000 (0:00:00.194) 0:01:46.807 ************ 2025-05-30 01:03:24.217447 | orchestrator | 2025-05-30 01:03:24.217457 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-05-30 01:03:24.217466 | orchestrator | Friday 30 May 2025 01:02:53 +0000 (0:00:00.118) 0:01:46.925 ************ 2025-05-30 01:03:24.217476 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:03:24.217538 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:03:24.217551 | orchestrator | changed: [testbed-node-2] 2025-05-30 01:03:24.217560 | orchestrator | 2025-05-30 01:03:24.217570 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-05-30 01:03:24.217580 | orchestrator | Friday 30 May 2025 01:03:00 +0000 (0:00:07.702) 0:01:54.628 ************ 2025-05-30 01:03:24.217589 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:03:24.217599 | orchestrator | changed: [testbed-node-2] 2025-05-30 01:03:24.217609 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:03:24.217618 | orchestrator | 2025-05-30 01:03:24.217628 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-05-30 01:03:24.217638 | orchestrator | Friday 30 May 2025 01:03:11 +0000 (0:00:10.680) 0:02:05.308 ************ 2025-05-30 01:03:24.217648 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:03:24.217657 | orchestrator | changed: [testbed-node-2] 2025-05-30 01:03:24.217667 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:03:24.217683 | orchestrator | 2025-05-30 01:03:24.217693 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 01:03:24.217703 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-30 01:03:24.217718 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-30 01:03:24.217728 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-30 01:03:24.217739 | orchestrator | 2025-05-30 01:03:24.217748 | orchestrator | 2025-05-30 01:03:24.217765 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-30 01:03:24.217775 | orchestrator | Friday 30 May 2025 01:03:22 +0000 (0:00:11.151) 0:02:16.460 ************ 2025-05-30 01:03:24.217785 | orchestrator | =============================================================================== 2025-05-30 01:03:24.217794 | orchestrator | service-ks-register : barbican | Creating projects --------------------- 16.39s 2025-05-30 01:03:24.217804 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.39s 2025-05-30 01:03:24.217813 | 
orchestrator | barbican : Restart barbican-worker container --------------------------- 11.15s 2025-05-30 01:03:24.217823 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.68s 2025-05-30 01:03:24.217833 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 10.40s 2025-05-30 01:03:24.217842 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.36s 2025-05-30 01:03:24.217852 | orchestrator | barbican : Restart barbican-api container ------------------------------- 7.70s 2025-05-30 01:03:24.217861 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.37s 2025-05-30 01:03:24.217871 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.99s 2025-05-30 01:03:24.217880 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.61s 2025-05-30 01:03:24.217890 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.02s 2025-05-30 01:03:24.217900 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.72s 2025-05-30 01:03:24.217909 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.58s 2025-05-30 01:03:24.217919 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.33s 2025-05-30 01:03:24.217928 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.22s 2025-05-30 01:03:24.217938 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 2.98s 2025-05-30 01:03:24.217948 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.49s 2025-05-30 01:03:24.217957 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.46s 2025-05-30 01:03:24.217967 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.17s 2025-05-30 01:03:24.217976 | orchestrator | barbican : include_tasks ------------------------------------------------ 2.03s 2025-05-30 01:03:24.217986 | orchestrator | 2025-05-30 01:03:24 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:03:24.217996 | orchestrator | 2025-05-30 01:03:24 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:03:24.218006 | orchestrator | 2025-05-30 01:03:24 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:03:27.244980 | orchestrator | 2025-05-30 01:03:27 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:03:27.245075 | orchestrator | 2025-05-30 01:03:27 | INFO  | Task ecbbbcb1-524d-4420-a775-955811dfb74c is in state STARTED 2025-05-30 01:03:27.245743 | orchestrator | 2025-05-30 01:03:27 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:03:27.246230 | orchestrator | 2025-05-30 01:03:27 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:03:27.246919 | orchestrator | 2025-05-30 01:03:27 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:03:27.246938 | orchestrator | 2025-05-30 01:03:27 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:03:30.289467 | orchestrator | 2025-05-30 01:03:30 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:03:30.291918 | 
orchestrator | 2025-05-30 01:03:30 | INFO  | Task ecbbbcb1-524d-4420-a775-955811dfb74c is in state STARTED 2025-05-30 01:03:30.294229 | orchestrator | 2025-05-30 01:03:30 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:03:30.295028 | orchestrator | 2025-05-30 01:03:30 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:03:30.296144 | orchestrator | 2025-05-30 01:03:30 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:03:30.296187 | orchestrator | 2025-05-30 01:03:30 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:03:33.342557 | orchestrator | 2025-05-30 01:03:33 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:03:33.343383 | orchestrator | 2025-05-30 01:03:33 | INFO  | Task ecbbbcb1-524d-4420-a775-955811dfb74c is in state STARTED 2025-05-30 01:03:33.343446 | orchestrator | 2025-05-30 01:03:33 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:03:33.343525 | orchestrator | 2025-05-30 01:03:33 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:03:33.344023 | orchestrator | 2025-05-30 01:03:33 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:03:33.344045 | orchestrator | 2025-05-30 01:03:33 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:03:36.388552 | orchestrator | 2025-05-30 01:03:36 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:03:36.388772 | orchestrator | 2025-05-30 01:03:36 | INFO  | Task ecbbbcb1-524d-4420-a775-955811dfb74c is in state STARTED 2025-05-30 01:03:36.389371 | orchestrator | 2025-05-30 01:03:36 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:03:36.390180 | orchestrator | 2025-05-30 01:03:36 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:03:36.390838 | orchestrator | 2025-05-30 01:03:36 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:03:36.391071 | orchestrator | 2025-05-30 01:03:36 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:03:39.413223 | orchestrator | 2025-05-30 01:03:39 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:03:39.413522 | orchestrator | 2025-05-30 01:03:39 | INFO  | Task ecbbbcb1-524d-4420-a775-955811dfb74c is in state STARTED 2025-05-30 01:03:39.414171 | orchestrator | 2025-05-30 01:03:39 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:03:39.414858 | orchestrator | 2025-05-30 01:03:39 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:03:39.415544 | orchestrator | 2025-05-30 01:03:39 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:03:39.415662 | orchestrator | 2025-05-30 01:03:39 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:03:42.455946 | orchestrator | 2025-05-30 01:03:42 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:03:42.459178 | orchestrator | 2025-05-30 01:03:42 | INFO  | Task ecbbbcb1-524d-4420-a775-955811dfb74c is in state STARTED 2025-05-30 01:03:42.461046 | orchestrator | 2025-05-30 01:03:42 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:03:42.467417 | orchestrator | 2025-05-30 01:03:42 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 
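The interleaved INFO lines in this stretch are the deployment wrapper polling the five task IDs until each reaches SUCCESS (task 9640f9c3-ad9f-464c-85eb-723a290a28c9 does so a few entries further down); note that the checks land roughly three seconds apart even though the message announces a one-second wait. A minimal sketch of such a wait loop is shown below; get_task_state is a placeholder assumption standing in for whatever interface the wrapper actually queries.

import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    # get_task_state(task_id) -> 'STARTED' or 'SUCCESS' is a placeholder
    # assumption, not an actual osism interface.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"INFO  | Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"INFO  | Wait {interval:.0f} second(s) until the next check")
            time.sleep(interval)

# Example call using a task ID taken from this log:
# wait_for_tasks(["fb4c5da4-6736-4528-a700-d20c81fc8612"], get_task_state)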
2025-05-30 01:03:42.469024 | orchestrator | 2025-05-30 01:03:42 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:03:42.469068 | orchestrator | 2025-05-30 01:03:42 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:03:45.522203 | orchestrator | 2025-05-30 01:03:45 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:03:45.524062 | orchestrator | 2025-05-30 01:03:45 | INFO  | Task ecbbbcb1-524d-4420-a775-955811dfb74c is in state STARTED 2025-05-30 01:03:45.525065 | orchestrator | 2025-05-30 01:03:45 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:03:45.526557 | orchestrator | 2025-05-30 01:03:45 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:03:45.527850 | orchestrator | 2025-05-30 01:03:45 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:03:45.527886 | orchestrator | 2025-05-30 01:03:45 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:03:48.580195 | orchestrator | 2025-05-30 01:03:48 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:03:48.581655 | orchestrator | 2025-05-30 01:03:48 | INFO  | Task ecbbbcb1-524d-4420-a775-955811dfb74c is in state STARTED 2025-05-30 01:03:48.584365 | orchestrator | 2025-05-30 01:03:48 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:03:48.586971 | orchestrator | 2025-05-30 01:03:48 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:03:48.588988 | orchestrator | 2025-05-30 01:03:48 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:03:48.589026 | orchestrator | 2025-05-30 01:03:48 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:03:51.651219 | orchestrator | 2025-05-30 01:03:51 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:03:51.652679 | orchestrator | 2025-05-30 01:03:51 | INFO  | Task ecbbbcb1-524d-4420-a775-955811dfb74c is in state STARTED 2025-05-30 01:03:51.654704 | orchestrator | 2025-05-30 01:03:51 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:03:51.656682 | orchestrator | 2025-05-30 01:03:51 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:03:51.658050 | orchestrator | 2025-05-30 01:03:51 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:03:51.658080 | orchestrator | 2025-05-30 01:03:51 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:03:54.727118 | orchestrator | 2025-05-30 01:03:54 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:03:54.728060 | orchestrator | 2025-05-30 01:03:54 | INFO  | Task ecbbbcb1-524d-4420-a775-955811dfb74c is in state STARTED 2025-05-30 01:03:54.729884 | orchestrator | 2025-05-30 01:03:54 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:03:54.735937 | orchestrator | 2025-05-30 01:03:54 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:03:54.737298 | orchestrator | 2025-05-30 01:03:54 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:03:54.737332 | orchestrator | 2025-05-30 01:03:54 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:03:57.771582 | orchestrator | 2025-05-30 01:03:57 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 
2025-05-30 01:03:57.773060 | orchestrator | 2025-05-30 01:03:57 | INFO  | Task ecbbbcb1-524d-4420-a775-955811dfb74c is in state STARTED 2025-05-30 01:03:57.775081 | orchestrator | 2025-05-30 01:03:57 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:03:57.779236 | orchestrator | 2025-05-30 01:03:57 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:03:57.779275 | orchestrator | 2025-05-30 01:03:57 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:03:57.779448 | orchestrator | 2025-05-30 01:03:57 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:04:00.828548 | orchestrator | 2025-05-30 01:04:00 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:04:00.828886 | orchestrator | 2025-05-30 01:04:00 | INFO  | Task ecbbbcb1-524d-4420-a775-955811dfb74c is in state STARTED 2025-05-30 01:04:00.830336 | orchestrator | 2025-05-30 01:04:00 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:04:00.832061 | orchestrator | 2025-05-30 01:04:00 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:04:00.833329 | orchestrator | 2025-05-30 01:04:00 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:04:00.833363 | orchestrator | 2025-05-30 01:04:00 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:04:03.885623 | orchestrator | 2025-05-30 01:04:03 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:04:03.885738 | orchestrator | 2025-05-30 01:04:03 | INFO  | Task ecbbbcb1-524d-4420-a775-955811dfb74c is in state STARTED 2025-05-30 01:04:03.887913 | orchestrator | 2025-05-30 01:04:03 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:04:03.888509 | orchestrator | 2025-05-30 01:04:03 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state STARTED 2025-05-30 01:04:03.892005 | orchestrator | 2025-05-30 01:04:03 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:04:03.892058 | orchestrator | 2025-05-30 01:04:03 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:04:06.939625 | orchestrator | 2025-05-30 01:04:06 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:04:06.940173 | orchestrator | 2025-05-30 01:04:06 | INFO  | Task ecbbbcb1-524d-4420-a775-955811dfb74c is in state STARTED 2025-05-30 01:04:06.944786 | orchestrator | 2025-05-30 01:04:06 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:04:06.947913 | orchestrator | 2025-05-30 01:04:06 | INFO  | Task 9640f9c3-ad9f-464c-85eb-723a290a28c9 is in state SUCCESS 2025-05-30 01:04:06.949329 | orchestrator | 2025-05-30 01:04:06.949372 | orchestrator | 2025-05-30 01:04:06.949392 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-30 01:04:06.949411 | orchestrator | 2025-05-30 01:04:06.949461 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-30 01:04:06.949482 | orchestrator | Friday 30 May 2025 01:01:06 +0000 (0:00:00.331) 0:00:00.331 ************ 2025-05-30 01:04:06.949501 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:04:06.949515 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:04:06.949545 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:04:06.949556 | orchestrator | 2025-05-30 01:04:06.949567 | 
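The "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines above come from the deploy wrapper polling its background tasks until each one leaves the STARTED state. As a rough sketch of that polling pattern only -- not the actual OSISM client code; wait_for_tasks, get_state and the fake state source are made-up names for illustration -- such a loop could look like this:

    import time
    from typing import Callable, Iterable

    def wait_for_tasks(task_ids: Iterable[str],
                       get_state: Callable[[str], str],
                       poll_interval: float = 1.0) -> None:
        # Mirrors the log pattern: report every task's state, drop the
        # finished ones, sleep, and repeat until nothing is still STARTED.
        pending = list(task_ids)
        while pending:
            still_running = []
            for task_id in pending:
                state = get_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state == "STARTED":
                    still_running.append(task_id)
            pending = still_running
            if pending:
                print(f"Wait {int(poll_interval)} second(s) until the next check")
                time.sleep(poll_interval)

    # Toy usage with a fake state source that flips to SUCCESS after three polls.
    if __name__ == "__main__":
        polls = {}
        def fake_state(task_id: str) -> str:
            polls[task_id] = polls.get(task_id, 0) + 1
            return "SUCCESS" if polls[task_id] > 3 else "STARTED"
        wait_for_tasks(["9640f9c3-ad9f-464c-85eb-723a290a28c9"], fake_state, poll_interval=0.1)

In the log the wrapper announces a 1-second wait but the cycles land about 3 seconds apart, since each cycle also spends time querying the five task states.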
orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-30 01:04:06.949578 | orchestrator | Friday 30 May 2025 01:01:07 +0000 (0:00:00.348) 0:00:00.680 ************
2025-05-30 01:04:06.949613 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-05-30 01:04:06.949625 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-05-30 01:04:06.949636 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-05-30 01:04:06.949646 | orchestrator |
2025-05-30 01:04:06.949657 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-05-30 01:04:06.949668 | orchestrator |
2025-05-30 01:04:06.949678 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-05-30 01:04:06.949689 | orchestrator | Friday 30 May 2025 01:01:07 +0000 (0:00:00.269) 0:00:00.950 ************
2025-05-30 01:04:06.949700 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-30 01:04:06.949712 | orchestrator |
2025-05-30 01:04:06.949722 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-05-30 01:04:06.949733 | orchestrator | Friday 30 May 2025 01:01:08 +0000 (0:00:00.610) 0:00:01.560 ************
2025-05-30 01:04:06.949744 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2025-05-30 01:04:06.949754 | orchestrator |
2025-05-30 01:04:06.949765 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2025-05-30 01:04:06.949776 | orchestrator | Friday 30 May 2025 01:01:11 +0000 (0:00:03.455) 0:00:05.016 ************
2025-05-30 01:04:06.949787 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2025-05-30 01:04:06.949797 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2025-05-30 01:04:06.949808 | orchestrator |
2025-05-30 01:04:06.949819 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2025-05-30 01:04:06.949830 | orchestrator | Friday 30 May 2025 01:01:17 +0000 (0:00:06.146) 0:00:11.162 ************
2025-05-30 01:04:06.949840 | orchestrator | changed: [testbed-node-0] => (item=service)
2025-05-30 01:04:06.949852 | orchestrator |
2025-05-30 01:04:06.949863 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2025-05-30 01:04:06.949874 | orchestrator | Friday 30 May 2025 01:01:20 +0000 (0:00:03.270) 0:00:14.432 ************
2025-05-30 01:04:06.949885 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-30 01:04:06.949895 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2025-05-30 01:04:06.949906 | orchestrator |
2025-05-30 01:04:06.949917 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2025-05-30 01:04:06.949928 | orchestrator | Friday 30 May 2025 01:01:24 +0000 (0:00:03.713) 0:00:18.145 ************
2025-05-30 01:04:06.949938 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-30 01:04:06.949949 | orchestrator |
2025-05-30 01:04:06.949960 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2025-05-30 01:04:06.949971 | orchestrator | Friday 30 May 2025 01:01:27 +0000 (0:00:02.965)
0:00:21.111 ************ 2025-05-30 01:04:06.949982 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-05-30 01:04:06.949992 | orchestrator | 2025-05-30 01:04:06.950003 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-05-30 01:04:06.950014 | orchestrator | Friday 30 May 2025 01:01:31 +0000 (0:00:04.004) 0:00:25.115 ************ 2025-05-30 01:04:06.950080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-30 01:04:06.950129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-30 01:04:06.950143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-30 01:04:06.950155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-30 01:04:06.950167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-30 01:04:06.950178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-30 01:04:06.950198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.950225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.950237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.950250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.950262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.950273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.950285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.950303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.950561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.950580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.950593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.950605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.950616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.950636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.950654 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.950666 | orchestrator | 2025-05-30 01:04:06.950678 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-05-30 01:04:06.950689 | orchestrator | Friday 30 May 2025 01:01:34 +0000 (0:00:03.143) 0:00:28.259 ************ 2025-05-30 01:04:06.950700 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:04:06.950711 | orchestrator | 2025-05-30 01:04:06.950722 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-05-30 01:04:06.950738 | orchestrator | Friday 30 May 2025 01:01:34 +0000 (0:00:00.127) 0:00:28.386 ************ 2025-05-30 01:04:06.950749 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:04:06.950760 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:04:06.950771 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:04:06.950782 | orchestrator | 2025-05-30 01:04:06.950792 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-30 01:04:06.950803 | orchestrator | Friday 30 May 2025 01:01:35 +0000 (0:00:00.496) 0:00:28.883 ************ 2025-05-30 01:04:06.950814 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 01:04:06.950825 | orchestrator | 2025-05-30 01:04:06.950836 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-05-30 01:04:06.950847 | orchestrator | Friday 30 May 2025 01:01:36 +0000 (0:00:00.744) 0:00:29.628 ************ 2025-05-30 01:04:06.950858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-30 01:04:06.950871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-30 01:04:06.950889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-30 01:04:06.950906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-30 01:04:06.950923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-30 01:04:06.950935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-30 01:04:06.950947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.950958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.950976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.950987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.951011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.951023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.951035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.951046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.951070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.951082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.951093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.951115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.951127 | orchestrator | 2025-05-30 01:04:06.951139 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-05-30 01:04:06.951150 | orchestrator | Friday 30 May 2025 01:01:42 +0000 (0:00:06.363) 0:00:35.991 ************ 2025-05-30 01:04:06.951161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-30 01:04:06.951172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-30 01:04:06.951190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.951201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.951213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.951839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.951864 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:04:06.951876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-30 01:04:06.951897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-30 01:04:06.951909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.951921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.951932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.951957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.951969 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:04:06.951980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-30 01:04:06.951998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-30 01:04:06.952009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.952021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.952032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.952055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.952067 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:04:06.952078 | orchestrator | 2025-05-30 01:04:06.952089 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-05-30 01:04:06.952100 | orchestrator | Friday 30 May 2025 01:01:43 +0000 (0:00:01.149) 0:00:37.141 ************ 2025-05-30 01:04:06.952111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-30 01:04:06.952129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-30 01:04:06.952141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.952152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.952164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.952186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.952197 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:04:06.952208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-30 01:04:06.952227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-30 01:04:06.952239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.952250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.952261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.952284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.952296 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:04:06.952307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-30 01:04:06.952324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-30 01:04:06.952336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.952347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.952359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.952377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.952389 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:04:06.952404 | orchestrator | 2025-05-30 01:04:06.952489 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-05-30 01:04:06.952513 | orchestrator | Friday 30 May 2025 01:01:45 +0000 (0:00:01.383) 0:00:38.524 ************ 2025-05-30 01:04:06.952526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-30 01:04:06.952540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-30 01:04:06.952554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-30 01:04:06.952568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-30 01:04:06.952598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-30 01:04:06.952619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-30 01:04:06.952632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.952645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.952659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.952673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.952686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.952711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.952732 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.952745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.952758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.952927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.952944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.952961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.952985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.952995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.953005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.953015 | orchestrator | 2025-05-30 01:04:06.953025 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-05-30 01:04:06.953035 | orchestrator | Friday 30 May 2025 01:01:52 +0000 (0:00:07.716) 0:00:46.241 ************ 2025-05-30 01:04:06.953045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-30 01:04:06.953055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-30 01:04:06.953082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-30 01:04:06.953093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-30 01:04:06.953104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-30 01:04:06.953114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-30 01:04:06.953124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.953134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.953157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.953172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.953182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.953192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.953202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.953212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.953222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.953243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.953283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-30 
01:04:06.953295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.953305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.953315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.953325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.953342 | orchestrator | 2025-05-30 01:04:06.953352 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-05-30 01:04:06.953362 | orchestrator | Friday 30 May 2025 01:02:15 +0000 (0:00:22.529) 0:01:08.771 ************ 2025-05-30 01:04:06.953371 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-30 01:04:06.953381 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-30 01:04:06.953391 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-30 01:04:06.953400 | orchestrator | 2025-05-30 01:04:06.953410 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-05-30 01:04:06.953448 | orchestrator | Friday 30 May 2025 01:02:22 +0000 (0:00:06.835) 0:01:15.606 ************ 2025-05-30 01:04:06.953458 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-30 01:04:06.953480 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-30 01:04:06.953493 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-30 01:04:06.953503 | orchestrator | 2025-05-30 01:04:06.953513 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-05-30 01:04:06.953527 | orchestrator | Friday 30 May 2025 01:02:27 +0000 (0:00:05.768) 0:01:21.375 ************ 2025-05-30 01:04:06.953537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-30 01:04:06.953548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-30 01:04:06.953558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-30 01:04:06.953575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-30 01:04:06.953590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.953609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.953619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.953629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-30 01:04:06.953639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.953655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.953665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.953745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-30 01:04:06.953758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.953768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.953778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.953795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.953806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.953816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.953836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.953847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.953857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.953867 | orchestrator | 2025-05-30 01:04:06.953877 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-05-30 01:04:06.953886 | orchestrator | Friday 30 May 2025 01:02:31 +0000 (0:00:03.527) 0:01:24.903 ************ 2025-05-30 01:04:06.953896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-30 01:04:06.953914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-30 01:04:06.954296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-30 01:04:06.954317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-30 01:04:06.954327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.954337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.954356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.954366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 
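The loop items dumped above (and the remaining ones below) all follow the same kolla-ansible per-service definition shape. As a reading aid, here is a minimal sketch of one entry in YAML, reconstructed purely from the values shown in this log rather than copied from the kolla-ansible designate role itself:

    # reconstructed from the logged loop item; illustrative only
    designate-backend-bind9:
      container_name: designate_backend_bind9
      group: designate-backend-bind9
      enabled: true
      image: registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206
      volumes:
        - /etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro
        - /etc/localtime:/etc/localtime:ro
        - /etc/timezone:/etc/timezone:ro
        - kolla_logs:/var/log/kolla/
        - designate_backend_bind9:/var/lib/named/
      dimensions: {}
      healthcheck:
        interval: "30"
        retries: "3"
        start_period: "5"
        test: ["CMD-SHELL", "healthcheck_listen named 53"]
        timeout: "30"

Only the designate-backend-bind9 entries report "changed" for the rndc.conf and rndc.key tasks; every other service in the loop is skipped, which is why most of this output consists of "skipping" lines.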
2025-05-30 01:04:06.954381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.954397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.954407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.954528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-30 01:04:06.954548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.954558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.954568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.954586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.954601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.954611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.954627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.954637 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.954648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.954657 | orchestrator | 2025-05-30 01:04:06.954666 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-30 01:04:06.954675 | orchestrator | Friday 30 May 2025 01:02:34 +0000 (0:00:02.785) 0:01:27.688 ************ 2025-05-30 01:04:06.954683 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:04:06.954690 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:04:06.954698 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:04:06.954706 | orchestrator | 2025-05-30 01:04:06.954714 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-05-30 01:04:06.954722 | orchestrator | Friday 30 May 2025 01:02:35 +0000 (0:00:01.125) 0:01:28.814 ************ 2025-05-30 01:04:06.954739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-30 01:04:06.954748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-30 01:04:06.954762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.954770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.954778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.954787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.954799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.954807 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:04:06.954820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-30 01:04:06.954839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-30 01:04:06.954847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.954856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.954864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.954872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.954889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.954899 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:04:06.954909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-30 01:04:06.954926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-30 01:04:06.954935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.954945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 
'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.954954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.954972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.954988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.954998 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:04:06.955008 | orchestrator | 2025-05-30 01:04:06.955017 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-05-30 01:04:06.955026 | orchestrator | Friday 30 May 2025 01:02:36 +0000 (0:00:01.376) 0:01:30.190 ************ 2025-05-30 01:04:06.955036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-30 01:04:06.955046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-30 01:04:06.955055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-30 01:04:06.955074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-30 01:04:06.955089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-30 01:04:06.955098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-30 01:04:06.955108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.955118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.955129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.955142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.955161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.955171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.955180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.955190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.955199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.955209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.955224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.955243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.955252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.955261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-30 01:04:06.955269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-30 01:04:06.955277 | orchestrator | 2025-05-30 01:04:06.955285 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-30 01:04:06.955293 | orchestrator | Friday 30 May 2025 01:02:41 +0000 (0:00:05.076) 0:01:35.267 ************ 2025-05-30 01:04:06.955301 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:04:06.955309 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:04:06.955317 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:04:06.955325 | orchestrator | 2025-05-30 01:04:06.955333 | orchestrator | TASK [designate : 
Creating Designate databases] ******************************** 2025-05-30 01:04:06.955341 | orchestrator | Friday 30 May 2025 01:02:42 +0000 (0:00:00.516) 0:01:35.783 ************ 2025-05-30 01:04:06.955349 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-05-30 01:04:06.955357 | orchestrator | 2025-05-30 01:04:06.955364 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-05-30 01:04:06.955372 | orchestrator | Friday 30 May 2025 01:02:44 +0000 (0:00:02.157) 0:01:37.940 ************ 2025-05-30 01:04:06.955380 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-30 01:04:06.955388 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-05-30 01:04:06.955401 | orchestrator | 2025-05-30 01:04:06.955409 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-05-30 01:04:06.955433 | orchestrator | Friday 30 May 2025 01:02:46 +0000 (0:00:02.354) 0:01:40.295 ************ 2025-05-30 01:04:06.955442 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:04:06.955450 | orchestrator | 2025-05-30 01:04:06.955457 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-05-30 01:04:06.955465 | orchestrator | Friday 30 May 2025 01:03:01 +0000 (0:00:14.714) 0:01:55.010 ************ 2025-05-30 01:04:06.955473 | orchestrator | 2025-05-30 01:04:06.955481 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-05-30 01:04:06.955488 | orchestrator | Friday 30 May 2025 01:03:01 +0000 (0:00:00.136) 0:01:55.146 ************ 2025-05-30 01:04:06.955496 | orchestrator | 2025-05-30 01:04:06.955504 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-05-30 01:04:06.955516 | orchestrator | Friday 30 May 2025 01:03:01 +0000 (0:00:00.113) 0:01:55.260 ************ 2025-05-30 01:04:06.955524 | orchestrator | 2025-05-30 01:04:06.955532 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-05-30 01:04:06.955539 | orchestrator | Friday 30 May 2025 01:03:01 +0000 (0:00:00.116) 0:01:55.376 ************ 2025-05-30 01:04:06.955547 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:04:06.955555 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:04:06.955563 | orchestrator | changed: [testbed-node-2] 2025-05-30 01:04:06.955571 | orchestrator | 2025-05-30 01:04:06.955582 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-05-30 01:04:06.955590 | orchestrator | Friday 30 May 2025 01:03:15 +0000 (0:00:13.621) 0:02:08.997 ************ 2025-05-30 01:04:06.955598 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:04:06.955606 | orchestrator | changed: [testbed-node-2] 2025-05-30 01:04:06.955614 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:04:06.955622 | orchestrator | 2025-05-30 01:04:06.955630 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-05-30 01:04:06.955638 | orchestrator | Friday 30 May 2025 01:03:26 +0000 (0:00:10.848) 0:02:19.846 ************ 2025-05-30 01:04:06.955645 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:04:06.955653 | orchestrator | changed: [testbed-node-2] 2025-05-30 01:04:06.955661 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:04:06.955669 | orchestrator | 2025-05-30 01:04:06.955676 | orchestrator | RUNNING 
HANDLER [designate : Restart designate-producer container] ************* 2025-05-30 01:04:06.955684 | orchestrator | Friday 30 May 2025 01:03:33 +0000 (0:00:07.258) 0:02:27.105 ************ 2025-05-30 01:04:06.955692 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:04:06.955700 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:04:06.955708 | orchestrator | changed: [testbed-node-2] 2025-05-30 01:04:06.955715 | orchestrator | 2025-05-30 01:04:06.955723 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-05-30 01:04:06.955731 | orchestrator | Friday 30 May 2025 01:03:45 +0000 (0:00:12.140) 0:02:39.245 ************ 2025-05-30 01:04:06.955739 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:04:06.955747 | orchestrator | changed: [testbed-node-2] 2025-05-30 01:04:06.955754 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:04:06.955762 | orchestrator | 2025-05-30 01:04:06.955770 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-05-30 01:04:06.955778 | orchestrator | Friday 30 May 2025 01:03:54 +0000 (0:00:09.037) 0:02:48.282 ************ 2025-05-30 01:04:06.955785 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:04:06.955793 | orchestrator | changed: [testbed-node-2] 2025-05-30 01:04:06.955801 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:04:06.955809 | orchestrator | 2025-05-30 01:04:06.955817 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-05-30 01:04:06.955824 | orchestrator | Friday 30 May 2025 01:04:01 +0000 (0:00:06.334) 0:02:54.616 ************ 2025-05-30 01:04:06.955832 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:04:06.955845 | orchestrator | 2025-05-30 01:04:06.955852 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 01:04:06.955861 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-30 01:04:06.955869 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-30 01:04:06.955877 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-30 01:04:06.955885 | orchestrator | 2025-05-30 01:04:06.955893 | orchestrator | 2025-05-30 01:04:06.955900 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-30 01:04:06.955908 | orchestrator | Friday 30 May 2025 01:04:06 +0000 (0:00:05.107) 0:02:59.724 ************ 2025-05-30 01:04:06.955916 | orchestrator | =============================================================================== 2025-05-30 01:04:06.955924 | orchestrator | designate : Copying over designate.conf -------------------------------- 22.53s 2025-05-30 01:04:06.955931 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.71s 2025-05-30 01:04:06.955939 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.62s 2025-05-30 01:04:06.955947 | orchestrator | designate : Restart designate-producer container ----------------------- 12.14s 2025-05-30 01:04:06.955955 | orchestrator | designate : Restart designate-api container ---------------------------- 10.85s 2025-05-30 01:04:06.955963 | orchestrator | designate : Restart designate-mdns container ---------------------------- 9.04s 2025-05-30 01:04:06.955971 | 
orchestrator | designate : Copying over config.json files for services ----------------- 7.72s 2025-05-30 01:04:06.955978 | orchestrator | designate : Restart designate-central container ------------------------- 7.26s 2025-05-30 01:04:06.955986 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.84s 2025-05-30 01:04:06.955994 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.36s 2025-05-30 01:04:06.956002 | orchestrator | designate : Restart designate-worker container -------------------------- 6.33s 2025-05-30 01:04:06.956010 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.15s 2025-05-30 01:04:06.956017 | orchestrator | designate : Copying over named.conf ------------------------------------- 5.77s 2025-05-30 01:04:06.956025 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 5.11s 2025-05-30 01:04:06.956033 | orchestrator | designate : Check designate containers ---------------------------------- 5.08s 2025-05-30 01:04:06.956041 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.00s 2025-05-30 01:04:06.956048 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.71s 2025-05-30 01:04:06.956060 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.53s 2025-05-30 01:04:06.956068 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.46s 2025-05-30 01:04:06.956076 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.27s 2025-05-30 01:04:06.956084 | orchestrator | 2025-05-30 01:04:06 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:04:06.956095 | orchestrator | 2025-05-30 01:04:06 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:04:10.010874 | orchestrator | 2025-05-30 01:04:10 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:04:10.012916 | orchestrator | 2025-05-30 01:04:10 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:04:10.015282 | orchestrator | 2025-05-30 01:04:10 | INFO  | Task ecbbbcb1-524d-4420-a775-955811dfb74c is in state STARTED 2025-05-30 01:04:10.016995 | orchestrator | 2025-05-30 01:04:10 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:04:10.018343 | orchestrator | 2025-05-30 01:04:10 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:04:10.018370 | orchestrator | 2025-05-30 01:04:10 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:04:13.073078 | orchestrator | 2025-05-30 01:04:13 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:04:13.078632 | orchestrator | 2025-05-30 01:04:13 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:04:13.080367 | orchestrator | 2025-05-30 01:04:13 | INFO  | Task ecbbbcb1-524d-4420-a775-955811dfb74c is in state STARTED 2025-05-30 01:04:13.081598 | orchestrator | 2025-05-30 01:04:13 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:04:13.083295 | orchestrator | 2025-05-30 01:04:13 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:04:13.083331 | orchestrator | 2025-05-30 01:04:13 | INFO  | Wait 1 second(s) until the next check 2025-05-30 
01:04:16.127483 | orchestrator | 2025-05-30 01:04:16 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:04:16.128861 | orchestrator | 2025-05-30 01:04:16 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:04:16.129945 | orchestrator | 2025-05-30 01:04:16 | INFO  | Task ecbbbcb1-524d-4420-a775-955811dfb74c is in state STARTED 2025-05-30 01:04:16.131280 | orchestrator | 2025-05-30 01:04:16 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:04:16.132527 | orchestrator | 2025-05-30 01:04:16 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:04:16.132560 | orchestrator | 2025-05-30 01:04:16 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:04:19.183003 | orchestrator | 2025-05-30 01:04:19 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:04:19.183100 | orchestrator | 2025-05-30 01:04:19 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:04:19.183440 | orchestrator | 2025-05-30 01:04:19 | INFO  | Task ecbbbcb1-524d-4420-a775-955811dfb74c is in state STARTED 2025-05-30 01:04:19.184214 | orchestrator | 2025-05-30 01:04:19 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:04:19.185204 | orchestrator | 2025-05-30 01:04:19 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:04:19.185223 | orchestrator | 2025-05-30 01:04:19 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:04:22.230890 | orchestrator | 2025-05-30 01:04:22 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:04:22.232831 | orchestrator | 2025-05-30 01:04:22 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:04:22.235096 | orchestrator | 2025-05-30 01:04:22 | INFO  | Task ecbbbcb1-524d-4420-a775-955811dfb74c is in state STARTED 2025-05-30 01:04:22.237305 | orchestrator | 2025-05-30 01:04:22 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:04:22.238925 | orchestrator | 2025-05-30 01:04:22 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:04:22.238961 | orchestrator | 2025-05-30 01:04:22 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:04:25.297258 | orchestrator | 2025-05-30 01:04:25 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:04:25.297934 | orchestrator | 2025-05-30 01:04:25 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:04:25.298483 | orchestrator | 2025-05-30 01:04:25 | INFO  | Task ecbbbcb1-524d-4420-a775-955811dfb74c is in state STARTED 2025-05-30 01:04:25.302872 | orchestrator | 2025-05-30 01:04:25 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:04:25.303849 | orchestrator | 2025-05-30 01:04:25 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:04:25.303937 | orchestrator | 2025-05-30 01:04:25 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:04:28.342937 | orchestrator | 2025-05-30 01:04:28 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:04:28.344271 | orchestrator | 2025-05-30 01:04:28 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:04:28.345275 | orchestrator | 2025-05-30 01:04:28 | INFO  | Task ecbbbcb1-524d-4420-a775-955811dfb74c is in 
state STARTED 2025-05-30 01:04:28.352988 | orchestrator | 2025-05-30 01:04:28 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:04:28.357219 | orchestrator | 2025-05-30 01:04:28 | INFO  | Task 88859e9c-2cfb-4341-98d4-1029f2b9da03 is in state STARTED 2025-05-30 01:04:28.361537 | orchestrator | 2025-05-30 01:04:28 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:04:28.361569 | orchestrator | 2025-05-30 01:04:28 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:04:31.395476 | orchestrator | 2025-05-30 01:04:31 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:04:31.395676 | orchestrator | 2025-05-30 01:04:31 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:04:31.396137 | orchestrator | 2025-05-30 01:04:31 | INFO  | Task ecbbbcb1-524d-4420-a775-955811dfb74c is in state STARTED 2025-05-30 01:04:31.396682 | orchestrator | 2025-05-30 01:04:31 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:04:31.397171 | orchestrator | 2025-05-30 01:04:31 | INFO  | Task 88859e9c-2cfb-4341-98d4-1029f2b9da03 is in state STARTED 2025-05-30 01:04:31.398160 | orchestrator | 2025-05-30 01:04:31 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:04:31.398184 | orchestrator | 2025-05-30 01:04:31 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:04:34.451542 | orchestrator | 2025-05-30 01:04:34 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:04:34.454320 | orchestrator | 2025-05-30 01:04:34 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:04:34.455740 | orchestrator | 2025-05-30 01:04:34 | INFO  | Task ecbbbcb1-524d-4420-a775-955811dfb74c is in state STARTED 2025-05-30 01:04:34.457676 | orchestrator | 2025-05-30 01:04:34 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:04:34.459109 | orchestrator | 2025-05-30 01:04:34 | INFO  | Task 88859e9c-2cfb-4341-98d4-1029f2b9da03 is in state STARTED 2025-05-30 01:04:34.460562 | orchestrator | 2025-05-30 01:04:34 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:04:34.460947 | orchestrator | 2025-05-30 01:04:34 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:04:37.512204 | orchestrator | 2025-05-30 01:04:37 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:04:37.513565 | orchestrator | 2025-05-30 01:04:37 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:04:37.515035 | orchestrator | 2025-05-30 01:04:37 | INFO  | Task ecbbbcb1-524d-4420-a775-955811dfb74c is in state STARTED 2025-05-30 01:04:37.516257 | orchestrator | 2025-05-30 01:04:37 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:04:37.517113 | orchestrator | 2025-05-30 01:04:37 | INFO  | Task 88859e9c-2cfb-4341-98d4-1029f2b9da03 is in state SUCCESS 2025-05-30 01:04:37.519479 | orchestrator | 2025-05-30 01:04:37 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:04:37.519506 | orchestrator | 2025-05-30 01:04:37 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:04:40.577648 | orchestrator | 2025-05-30 01:04:40 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:04:40.579520 | orchestrator | 2025-05-30 01:04:40 | INFO  | Task 
f1a47042-8a89-4d67-8936-6108382708aa is in state STARTED 2025-05-30 01:04:40.581720 | orchestrator | 2025-05-30 01:04:40 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:04:40.583521 | orchestrator | 2025-05-30 01:04:40 | INFO  | Task ecbbbcb1-524d-4420-a775-955811dfb74c is in state SUCCESS 2025-05-30 01:04:40.584822 | orchestrator | 2025-05-30 01:04:40.584865 | orchestrator | None 2025-05-30 01:04:40.584878 | orchestrator | 2025-05-30 01:04:40.584889 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-30 01:04:40.584901 | orchestrator | 2025-05-30 01:04:40.584912 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-30 01:04:40.584942 | orchestrator | Friday 30 May 2025 01:03:26 +0000 (0:00:00.327) 0:00:00.327 ************ 2025-05-30 01:04:40.584953 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:04:40.584965 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:04:40.584976 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:04:40.584987 | orchestrator | 2025-05-30 01:04:40.584998 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-30 01:04:40.585008 | orchestrator | Friday 30 May 2025 01:03:26 +0000 (0:00:00.776) 0:00:01.103 ************ 2025-05-30 01:04:40.585020 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-05-30 01:04:40.585031 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-05-30 01:04:40.585042 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-05-30 01:04:40.585053 | orchestrator | 2025-05-30 01:04:40.585064 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-05-30 01:04:40.585083 | orchestrator | 2025-05-30 01:04:40.585132 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-30 01:04:40.585152 | orchestrator | Friday 30 May 2025 01:03:27 +0000 (0:00:00.782) 0:00:01.886 ************ 2025-05-30 01:04:40.585194 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 01:04:40.585207 | orchestrator | 2025-05-30 01:04:40.585218 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-05-30 01:04:40.585229 | orchestrator | Friday 30 May 2025 01:03:28 +0000 (0:00:00.964) 0:00:02.851 ************ 2025-05-30 01:04:40.585240 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-05-30 01:04:40.585251 | orchestrator | 2025-05-30 01:04:40.585262 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-05-30 01:04:40.585273 | orchestrator | Friday 30 May 2025 01:03:32 +0000 (0:00:03.391) 0:00:06.242 ************ 2025-05-30 01:04:40.585284 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-05-30 01:04:40.585295 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-05-30 01:04:40.585306 | orchestrator | 2025-05-30 01:04:40.585317 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-05-30 01:04:40.585328 | orchestrator | Friday 30 May 2025 01:03:38 +0000 (0:00:06.318) 0:00:12.561 ************ 2025-05-30 01:04:40.585339 | orchestrator | ok: 
[testbed-node-0] => (item=service) 2025-05-30 01:04:40.585350 | orchestrator | 2025-05-30 01:04:40.585417 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-05-30 01:04:40.585431 | orchestrator | Friday 30 May 2025 01:03:41 +0000 (0:00:03.092) 0:00:15.653 ************ 2025-05-30 01:04:40.585444 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-30 01:04:40.585456 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-05-30 01:04:40.585470 | orchestrator | 2025-05-30 01:04:40.585482 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-05-30 01:04:40.585495 | orchestrator | Friday 30 May 2025 01:03:45 +0000 (0:00:03.795) 0:00:19.449 ************ 2025-05-30 01:04:40.585508 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-30 01:04:40.585521 | orchestrator | 2025-05-30 01:04:40.585534 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-05-30 01:04:40.585546 | orchestrator | Friday 30 May 2025 01:03:48 +0000 (0:00:03.175) 0:00:22.624 ************ 2025-05-30 01:04:40.585558 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-05-30 01:04:40.585571 | orchestrator | 2025-05-30 01:04:40.585584 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-30 01:04:40.585596 | orchestrator | Friday 30 May 2025 01:03:52 +0000 (0:00:04.176) 0:00:26.800 ************ 2025-05-30 01:04:40.585609 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:04:40.585622 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:04:40.585635 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:04:40.585648 | orchestrator | 2025-05-30 01:04:40.585660 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-05-30 01:04:40.585673 | orchestrator | Friday 30 May 2025 01:03:53 +0000 (0:00:00.418) 0:00:27.218 ************ 2025-05-30 01:04:40.585690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-30 01:04:40.585730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-30 01:04:40.585746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-30 01:04:40.585767 | orchestrator | 2025-05-30 01:04:40.585778 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-05-30 01:04:40.585789 | orchestrator | Friday 30 May 2025 01:03:54 +0000 (0:00:01.210) 0:00:28.429 ************ 2025-05-30 01:04:40.585801 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:04:40.585811 | orchestrator | 2025-05-30 01:04:40.585822 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-05-30 01:04:40.585833 | orchestrator | Friday 30 May 2025 01:03:54 +0000 (0:00:00.127) 0:00:28.557 ************ 2025-05-30 01:04:40.585844 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:04:40.585855 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:04:40.585866 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:04:40.585900 | orchestrator | 2025-05-30 01:04:40.585911 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-30 01:04:40.585922 | orchestrator | Friday 30 May 2025 01:03:54 +0000 (0:00:00.307) 0:00:28.864 ************ 2025-05-30 01:04:40.585933 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 01:04:40.585944 | orchestrator | 2025-05-30 01:04:40.585955 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-05-30 01:04:40.585966 | orchestrator | Friday 30 May 2025 01:03:55 +0000 (0:00:00.947) 0:00:29.811 ************ 2025-05-30 01:04:40.585977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 
'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-30 01:04:40.586005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-30 01:04:40.586072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-30 01:04:40.586097 | orchestrator | 2025-05-30 01:04:40.586108 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-05-30 01:04:40.586119 | orchestrator | Friday 30 May 2025 01:03:57 +0000 (0:00:02.239) 0:00:32.050 ************ 2025-05-30 01:04:40.586130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-30 01:04:40.586142 | orchestrator | skipping: 
[testbed-node-0] 2025-05-30 01:04:40.586154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-30 01:04:40.586165 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:04:40.586192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-30 01:04:40.586239 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:04:40.586259 | orchestrator | 2025-05-30 01:04:40.586277 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-05-30 01:04:40.586305 | orchestrator | Friday 30 May 2025 01:03:58 +0000 (0:00:00.457) 0:00:32.507 ************ 2025-05-30 01:04:40.586324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-30 01:04:40.586344 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:04:40.586389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-30 01:04:40.586411 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:04:40.586469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-30 01:04:40.586484 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:04:40.586495 | orchestrator | 2025-05-30 01:04:40.586507 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-05-30 01:04:40.586518 | orchestrator | Friday 30 May 2025 01:03:59 +0000 (0:00:00.967) 0:00:33.475 ************ 2025-05-30 01:04:40.586599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-30 01:04:40.586623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-30 01:04:40.586635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-30 01:04:40.586646 | orchestrator | 2025-05-30 01:04:40.586657 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-05-30 01:04:40.586668 | orchestrator | Friday 30 May 2025 01:04:01 +0000 (0:00:01.724) 0:00:35.199 ************ 2025-05-30 01:04:40.586680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-30 01:04:40.586692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-30 01:04:40.586724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-30 01:04:40.586737 | orchestrator | 2025-05-30 01:04:40.586748 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-05-30 01:04:40.586759 | orchestrator | Friday 30 May 2025 01:04:03 +0000 (0:00:02.414) 0:00:37.613 ************ 2025-05-30 01:04:40.586770 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-30 01:04:40.586781 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-30 01:04:40.586792 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-30 01:04:40.586803 | orchestrator | 2025-05-30 01:04:40.586814 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-05-30 01:04:40.586825 | orchestrator | Friday 30 May 2025 01:04:05 +0000 (0:00:01.816) 0:00:39.430 ************ 2025-05-30 01:04:40.586836 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:04:40.586847 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:04:40.586857 | orchestrator | changed: [testbed-node-2] 2025-05-30 01:04:40.586868 | orchestrator | 2025-05-30 01:04:40.586879 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-05-30 01:04:40.586890 | orchestrator | Friday 30 May 2025 01:04:07 +0000 (0:00:01.727) 0:00:41.158 ************ 2025-05-30 01:04:40.586901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-30 01:04:40.586913 | orchestrator | skipping: [testbed-node-0] 
2025-05-30 01:04:40.586924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-30 01:04:40.586942 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:04:40.586972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-30 01:04:40.586984 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:04:40.586995 | orchestrator | 2025-05-30 01:04:40.587006 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-05-30 01:04:40.587017 | orchestrator | Friday 30 May 2025 01:04:07 +0000 (0:00:00.795) 0:00:41.954 ************ 2025-05-30 01:04:40.587028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-30 01:04:40.587040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-30 01:04:40.587052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-30 01:04:40.587069 | orchestrator | 2025-05-30 01:04:40.587080 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-05-30 01:04:40.587091 | orchestrator | Friday 30 May 2025 01:04:09 +0000 (0:00:01.300) 0:00:43.255 ************ 2025-05-30 01:04:40.587102 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:04:40.587113 | orchestrator | 2025-05-30 01:04:40.587123 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-05-30 01:04:40.587134 | orchestrator | Friday 30 May 2025 01:04:11 +0000 (0:00:02.485) 0:00:45.740 ************ 2025-05-30 01:04:40.587145 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:04:40.587156 | orchestrator | 2025-05-30 01:04:40.587167 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-05-30 01:04:40.587177 | orchestrator | Friday 30 May 2025 01:04:14 +0000 (0:00:02.417) 0:00:48.157 ************ 2025-05-30 01:04:40.587194 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:04:40.587206 | orchestrator | 2025-05-30 01:04:40.587217 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-30 01:04:40.587228 | orchestrator | Friday 30 May 2025 01:04:26 +0000 (0:00:12.169) 0:01:00.327 ************ 2025-05-30 01:04:40.587238 | orchestrator | 2025-05-30 01:04:40.587254 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-30 01:04:40.587265 | orchestrator | Friday 30 May 2025 01:04:26 +0000 (0:00:00.139) 0:01:00.466 ************ 2025-05-30 01:04:40.587276 | orchestrator | 2025-05-30 01:04:40.587287 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-30 01:04:40.587298 | orchestrator | Friday 30 May 2025 01:04:26 +0000 (0:00:00.221) 0:01:00.688 
************ 2025-05-30 01:04:40.587308 | orchestrator | 2025-05-30 01:04:40.587319 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-05-30 01:04:40.587330 | orchestrator | Friday 30 May 2025 01:04:26 +0000 (0:00:00.134) 0:01:00.822 ************ 2025-05-30 01:04:40.587341 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:04:40.587352 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:04:40.587400 | orchestrator | changed: [testbed-node-2] 2025-05-30 01:04:40.587413 | orchestrator | 2025-05-30 01:04:40.587424 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 01:04:40.587436 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-30 01:04:40.587449 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-30 01:04:40.587460 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-30 01:04:40.587471 | orchestrator | 2025-05-30 01:04:40.587482 | orchestrator | 2025-05-30 01:04:40.587493 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-30 01:04:40.587503 | orchestrator | Friday 30 May 2025 01:04:38 +0000 (0:00:11.545) 0:01:12.368 ************ 2025-05-30 01:04:40.587514 | orchestrator | =============================================================================== 2025-05-30 01:04:40.587525 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.17s 2025-05-30 01:04:40.587536 | orchestrator | placement : Restart placement-api container ---------------------------- 11.55s 2025-05-30 01:04:40.587546 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.32s 2025-05-30 01:04:40.587557 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.18s 2025-05-30 01:04:40.587575 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.80s 2025-05-30 01:04:40.587586 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.39s 2025-05-30 01:04:40.587597 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.18s 2025-05-30 01:04:40.587607 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.09s 2025-05-30 01:04:40.587618 | orchestrator | placement : Creating placement databases -------------------------------- 2.49s 2025-05-30 01:04:40.587629 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.42s 2025-05-30 01:04:40.587639 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.41s 2025-05-30 01:04:40.587650 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.24s 2025-05-30 01:04:40.587661 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.82s 2025-05-30 01:04:40.587671 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.73s 2025-05-30 01:04:40.587682 | orchestrator | placement : Copying over config.json files for services ----------------- 1.72s 2025-05-30 01:04:40.587693 | orchestrator | placement : Check placement containers ---------------------------------- 1.30s 2025-05-30 01:04:40.587703 | 
orchestrator | placement : Ensuring config directories exist --------------------------- 1.21s 2025-05-30 01:04:40.587714 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.97s 2025-05-30 01:04:40.587725 | orchestrator | placement : include_tasks ----------------------------------------------- 0.96s 2025-05-30 01:04:40.587735 | orchestrator | placement : include_tasks ----------------------------------------------- 0.95s 2025-05-30 01:04:40.587746 | orchestrator | 2025-05-30 01:04:40 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:04:40.587757 | orchestrator | 2025-05-30 01:04:40 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:04:40.587768 | orchestrator | 2025-05-30 01:04:40 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:04:43.647996 | orchestrator | 2025-05-30 01:04:43 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:04:43.649987 | orchestrator | 2025-05-30 01:04:43 | INFO  | Task f1a47042-8a89-4d67-8936-6108382708aa is in state SUCCESS 2025-05-30 01:04:43.651831 | orchestrator | 2025-05-30 01:04:43 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:04:43.654058 | orchestrator | 2025-05-30 01:04:43 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:04:43.655807 | orchestrator | 2025-05-30 01:04:43 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:04:43.656303 | orchestrator | 2025-05-30 01:04:43 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:04:46.729408 | orchestrator | 2025-05-30 01:04:46 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:04:46.730737 | orchestrator | 2025-05-30 01:04:46 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:04:46.732126 | orchestrator | 2025-05-30 01:04:46 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:04:46.733650 | orchestrator | 2025-05-30 01:04:46 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:04:46.734822 | orchestrator | 2025-05-30 01:04:46 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:04:46.734849 | orchestrator | 2025-05-30 01:04:46 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:04:49.782262 | orchestrator | 2025-05-30 01:04:49 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:04:49.782719 | orchestrator | 2025-05-30 01:04:49 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:04:49.784051 | orchestrator | 2025-05-30 01:04:49 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:04:49.785840 | orchestrator | 2025-05-30 01:04:49 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:04:49.786104 | orchestrator | 2025-05-30 01:04:49 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:04:49.786195 | orchestrator | 2025-05-30 01:04:49 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:04:52.832344 | orchestrator | 2025-05-30 01:04:52 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:04:52.832482 | orchestrator | 2025-05-30 01:04:52 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:04:52.832495 | orchestrator | 2025-05-30 
01:04:52 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:04:52.832505 | orchestrator | 2025-05-30 01:04:52 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:04:52.832515 | orchestrator | 2025-05-30 01:04:52 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:04:52.832524 | orchestrator | 2025-05-30 01:04:52 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:04:55.847494 | orchestrator | 2025-05-30 01:04:55 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:04:55.847714 | orchestrator | 2025-05-30 01:04:55 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:04:55.848063 | orchestrator | 2025-05-30 01:04:55 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:04:55.848649 | orchestrator | 2025-05-30 01:04:55 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:04:55.850430 | orchestrator | 2025-05-30 01:04:55 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:04:55.850582 | orchestrator | 2025-05-30 01:04:55 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:04:58.877924 | orchestrator | 2025-05-30 01:04:58 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:04:58.878080 | orchestrator | 2025-05-30 01:04:58 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:04:58.878097 | orchestrator | 2025-05-30 01:04:58 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:04:58.878395 | orchestrator | 2025-05-30 01:04:58 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:04:58.882202 | orchestrator | 2025-05-30 01:04:58 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:04:58.882251 | orchestrator | 2025-05-30 01:04:58 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:05:01.917093 | orchestrator | 2025-05-30 01:05:01 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:05:01.919171 | orchestrator | 2025-05-30 01:05:01 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:05:01.919559 | orchestrator | 2025-05-30 01:05:01 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:05:01.920318 | orchestrator | 2025-05-30 01:05:01 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:05:01.921068 | orchestrator | 2025-05-30 01:05:01 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:05:01.921202 | orchestrator | 2025-05-30 01:05:01 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:05:04.957264 | orchestrator | 2025-05-30 01:05:04 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:05:04.957543 | orchestrator | 2025-05-30 01:05:04 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:05:04.958251 | orchestrator | 2025-05-30 01:05:04 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:05:04.959114 | orchestrator | 2025-05-30 01:05:04 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:05:04.959640 | orchestrator | 2025-05-30 01:05:04 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:05:04.959845 | 
orchestrator | 2025-05-30 01:05:04 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:05:08.001027 | orchestrator | 2025-05-30 01:05:07 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:05:08.001164 | orchestrator | 2025-05-30 01:05:07 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:05:08.001260 | orchestrator | 2025-05-30 01:05:07 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:05:08.004364 | orchestrator | 2025-05-30 01:05:08 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:05:08.004685 | orchestrator | 2025-05-30 01:05:08 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:05:08.004714 | orchestrator | 2025-05-30 01:05:08 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:05:11.035563 | orchestrator | 2025-05-30 01:05:11 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:05:11.035653 | orchestrator | 2025-05-30 01:05:11 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:05:11.036956 | orchestrator | 2025-05-30 01:05:11 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:05:11.037456 | orchestrator | 2025-05-30 01:05:11 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:05:11.038147 | orchestrator | 2025-05-30 01:05:11 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:05:11.038189 | orchestrator | 2025-05-30 01:05:11 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:05:14.079628 | orchestrator | 2025-05-30 01:05:14 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:05:14.080738 | orchestrator | 2025-05-30 01:05:14 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:05:14.082967 | orchestrator | 2025-05-30 01:05:14 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:05:14.084751 | orchestrator | 2025-05-30 01:05:14 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:05:14.086136 | orchestrator | 2025-05-30 01:05:14 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:05:14.087371 | orchestrator | 2025-05-30 01:05:14 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:05:17.137700 | orchestrator | 2025-05-30 01:05:17 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:05:17.137920 | orchestrator | 2025-05-30 01:05:17 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:05:17.138755 | orchestrator | 2025-05-30 01:05:17 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:05:17.139452 | orchestrator | 2025-05-30 01:05:17 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:05:17.140261 | orchestrator | 2025-05-30 01:05:17 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:05:17.140286 | orchestrator | 2025-05-30 01:05:17 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:05:20.179956 | orchestrator | 2025-05-30 01:05:20 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:05:20.180144 | orchestrator | 2025-05-30 01:05:20 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:05:20.181620 | 
orchestrator | 2025-05-30 01:05:20 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:05:20.181653 | orchestrator | 2025-05-30 01:05:20 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:05:20.182246 | orchestrator | 2025-05-30 01:05:20 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:05:20.182291 | orchestrator | 2025-05-30 01:05:20 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:05:23.213061 | orchestrator | 2025-05-30 01:05:23 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:05:23.213243 | orchestrator | 2025-05-30 01:05:23 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:05:23.213574 | orchestrator | 2025-05-30 01:05:23 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:05:23.214741 | orchestrator | 2025-05-30 01:05:23 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:05:23.214778 | orchestrator | 2025-05-30 01:05:23 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:05:23.214790 | orchestrator | 2025-05-30 01:05:23 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:05:26.238640 | orchestrator | 2025-05-30 01:05:26 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:05:26.238749 | orchestrator | 2025-05-30 01:05:26 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:05:26.238938 | orchestrator | 2025-05-30 01:05:26 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:05:26.239756 | orchestrator | 2025-05-30 01:05:26 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:05:26.240015 | orchestrator | 2025-05-30 01:05:26 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:05:26.240040 | orchestrator | 2025-05-30 01:05:26 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:05:29.264974 | orchestrator | 2025-05-30 01:05:29 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:05:29.265084 | orchestrator | 2025-05-30 01:05:29 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:05:29.265206 | orchestrator | 2025-05-30 01:05:29 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:05:29.265786 | orchestrator | 2025-05-30 01:05:29 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:05:29.269146 | orchestrator | 2025-05-30 01:05:29 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:05:29.269172 | orchestrator | 2025-05-30 01:05:29 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:05:32.310554 | orchestrator | 2025-05-30 01:05:32 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:05:32.311695 | orchestrator | 2025-05-30 01:05:32 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:05:32.314607 | orchestrator | 2025-05-30 01:05:32 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:05:32.315601 | orchestrator | 2025-05-30 01:05:32 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:05:32.316893 | orchestrator | 2025-05-30 01:05:32 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 
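The repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" entries above come from the deployment wrapper polling its task queue until every queued kolla-ansible run reports SUCCESS. The following is only a minimal illustrative sketch of that polling pattern, not the actual osism implementation; the helper get_task_state, the one-second interval, and the timeout are assumptions for illustration.

    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1.0, timeout=3600.0):
        # Poll each task until it reaches a terminal state (SUCCESS/FAILURE).
        # get_task_state is a caller-supplied callable mapping a task id to its
        # current state string (hypothetical stand-in for the real task backend).
        deadline = time.monotonic() + timeout
        pending = set(task_ids)
        while pending:
            if time.monotonic() > deadline:
                raise TimeoutError(f"tasks still pending: {sorted(pending)}")
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)

Under these assumptions, a caller would pass the task UUIDs seen in the log plus a function that queries the task backend, and the loop prints one status line per task per iteration, exactly the shape of the messages recorded here.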
2025-05-30 01:05:32.316924 | orchestrator | 2025-05-30 01:05:32 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:05:35.374980 | orchestrator | 2025-05-30 01:05:35 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:05:35.377147 | orchestrator | 2025-05-30 01:05:35 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:05:35.379093 | orchestrator | 2025-05-30 01:05:35 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:05:35.381953 | orchestrator | 2025-05-30 01:05:35 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:05:35.382733 | orchestrator | 2025-05-30 01:05:35 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:05:35.383001 | orchestrator | 2025-05-30 01:05:35 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:05:38.438872 | orchestrator | 2025-05-30 01:05:38 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:05:38.441335 | orchestrator | 2025-05-30 01:05:38 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:05:38.443778 | orchestrator | 2025-05-30 01:05:38 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state STARTED 2025-05-30 01:05:38.446565 | orchestrator | 2025-05-30 01:05:38 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:05:38.449334 | orchestrator | 2025-05-30 01:05:38 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:05:38.449421 | orchestrator | 2025-05-30 01:05:38 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:05:41.489274 | orchestrator | 2025-05-30 01:05:41 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:05:41.491549 | orchestrator | 2025-05-30 01:05:41 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:05:41.502947 | orchestrator | 2025-05-30 01:05:41 | INFO  | Task e3c2cb5f-f90f-4214-9548-9807be069bfc is in state SUCCESS 2025-05-30 01:05:41.504351 | orchestrator | 2025-05-30 01:05:41.504388 | orchestrator | 2025-05-30 01:05:41.504401 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-30 01:05:41.504412 | orchestrator | 2025-05-30 01:05:41.504424 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-30 01:05:41.504435 | orchestrator | Friday 30 May 2025 01:04:41 +0000 (0:00:00.219) 0:00:00.219 ************ 2025-05-30 01:05:41.504446 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:05:41.504509 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:05:41.504524 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:05:41.504535 | orchestrator | 2025-05-30 01:05:41.504547 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-30 01:05:41.504586 | orchestrator | Friday 30 May 2025 01:04:41 +0000 (0:00:00.401) 0:00:00.621 ************ 2025-05-30 01:05:41.504599 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-05-30 01:05:41.504611 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-05-30 01:05:41.504649 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-05-30 01:05:41.504714 | orchestrator | 2025-05-30 01:05:41.504784 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-05-30 
01:05:41.504823 | orchestrator | 2025-05-30 01:05:41.504835 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-05-30 01:05:41.504845 | orchestrator | Friday 30 May 2025 01:04:42 +0000 (0:00:00.484) 0:00:01.105 ************ 2025-05-30 01:05:41.504856 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:05:41.504868 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:05:41.504879 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:05:41.504890 | orchestrator | 2025-05-30 01:05:41.504901 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 01:05:41.504912 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 01:05:41.504925 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 01:05:41.504936 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 01:05:41.504947 | orchestrator | 2025-05-30 01:05:41.504957 | orchestrator | 2025-05-30 01:05:41.504983 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-30 01:05:41.504995 | orchestrator | Friday 30 May 2025 01:04:43 +0000 (0:00:00.795) 0:00:01.901 ************ 2025-05-30 01:05:41.505006 | orchestrator | =============================================================================== 2025-05-30 01:05:41.505017 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.80s 2025-05-30 01:05:41.505028 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.48s 2025-05-30 01:05:41.505038 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.40s 2025-05-30 01:05:41.505049 | orchestrator | 2025-05-30 01:05:41.505060 | orchestrator | 2025-05-30 01:05:41.505082 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-30 01:05:41.505094 | orchestrator | 2025-05-30 01:05:41.505105 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-30 01:05:41.505138 | orchestrator | Friday 30 May 2025 01:01:06 +0000 (0:00:00.336) 0:00:00.336 ************ 2025-05-30 01:05:41.505150 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:05:41.505160 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:05:41.505171 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:05:41.505182 | orchestrator | ok: [testbed-node-3] 2025-05-30 01:05:41.505193 | orchestrator | ok: [testbed-node-4] 2025-05-30 01:05:41.505204 | orchestrator | ok: [testbed-node-5] 2025-05-30 01:05:41.505214 | orchestrator | 2025-05-30 01:05:41.505225 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-30 01:05:41.505236 | orchestrator | Friday 30 May 2025 01:01:07 +0000 (0:00:00.773) 0:00:01.110 ************ 2025-05-30 01:05:41.505247 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-05-30 01:05:41.505258 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-05-30 01:05:41.505269 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-05-30 01:05:41.505298 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-05-30 01:05:41.505310 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-05-30 01:05:41.505320 | orchestrator | ok: 
[testbed-node-5] => (item=enable_neutron_True) 2025-05-30 01:05:41.505349 | orchestrator | 2025-05-30 01:05:41.505360 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-05-30 01:05:41.505371 | orchestrator | 2025-05-30 01:05:41.505382 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-30 01:05:41.505393 | orchestrator | Friday 30 May 2025 01:01:07 +0000 (0:00:00.591) 0:00:01.701 ************ 2025-05-30 01:05:41.505404 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 01:05:41.505416 | orchestrator | 2025-05-30 01:05:41.505427 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-05-30 01:05:41.505450 | orchestrator | Friday 30 May 2025 01:01:08 +0000 (0:00:00.925) 0:00:02.626 ************ 2025-05-30 01:05:41.505475 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:05:41.505486 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:05:41.505497 | orchestrator | ok: [testbed-node-3] 2025-05-30 01:05:41.505508 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:05:41.505519 | orchestrator | ok: [testbed-node-4] 2025-05-30 01:05:41.505530 | orchestrator | ok: [testbed-node-5] 2025-05-30 01:05:41.505553 | orchestrator | 2025-05-30 01:05:41.505584 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-05-30 01:05:41.505596 | orchestrator | Friday 30 May 2025 01:01:09 +0000 (0:00:01.071) 0:00:03.698 ************ 2025-05-30 01:05:41.505607 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:05:41.505618 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:05:41.505629 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:05:41.505640 | orchestrator | ok: [testbed-node-3] 2025-05-30 01:05:41.505650 | orchestrator | ok: [testbed-node-4] 2025-05-30 01:05:41.505674 | orchestrator | ok: [testbed-node-5] 2025-05-30 01:05:41.505686 | orchestrator | 2025-05-30 01:05:41.505696 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-05-30 01:05:41.505707 | orchestrator | Friday 30 May 2025 01:01:10 +0000 (0:00:01.042) 0:00:04.741 ************ 2025-05-30 01:05:41.505718 | orchestrator | ok: [testbed-node-0] => { 2025-05-30 01:05:41.505777 | orchestrator |  "changed": false, 2025-05-30 01:05:41.505789 | orchestrator |  "msg": "All assertions passed" 2025-05-30 01:05:41.505800 | orchestrator | } 2025-05-30 01:05:41.505812 | orchestrator | ok: [testbed-node-1] => { 2025-05-30 01:05:41.505822 | orchestrator |  "changed": false, 2025-05-30 01:05:41.505833 | orchestrator |  "msg": "All assertions passed" 2025-05-30 01:05:41.505844 | orchestrator | } 2025-05-30 01:05:41.505855 | orchestrator | ok: [testbed-node-2] => { 2025-05-30 01:05:41.505866 | orchestrator |  "changed": false, 2025-05-30 01:05:41.505876 | orchestrator |  "msg": "All assertions passed" 2025-05-30 01:05:41.505887 | orchestrator | } 2025-05-30 01:05:41.505898 | orchestrator | ok: [testbed-node-3] => { 2025-05-30 01:05:41.505909 | orchestrator |  "changed": false, 2025-05-30 01:05:41.505920 | orchestrator |  "msg": "All assertions passed" 2025-05-30 01:05:41.505930 | orchestrator | } 2025-05-30 01:05:41.505941 | orchestrator | ok: [testbed-node-4] => { 2025-05-30 01:05:41.505952 | orchestrator |  "changed": false, 2025-05-30 01:05:41.505963 | orchestrator |  "msg": "All assertions 
passed" 2025-05-30 01:05:41.505973 | orchestrator | } 2025-05-30 01:05:41.505984 | orchestrator | ok: [testbed-node-5] => { 2025-05-30 01:05:41.505995 | orchestrator |  "changed": false, 2025-05-30 01:05:41.506006 | orchestrator |  "msg": "All assertions passed" 2025-05-30 01:05:41.506070 | orchestrator | } 2025-05-30 01:05:41.506085 | orchestrator | 2025-05-30 01:05:41.506097 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-05-30 01:05:41.506108 | orchestrator | Friday 30 May 2025 01:01:11 +0000 (0:00:00.565) 0:00:05.306 ************ 2025-05-30 01:05:41.506119 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:05:41.506129 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:05:41.506140 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:05:41.506151 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:05:41.506162 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:05:41.506173 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:05:41.506183 | orchestrator | 2025-05-30 01:05:41.506194 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-05-30 01:05:41.506205 | orchestrator | Friday 30 May 2025 01:01:12 +0000 (0:00:00.672) 0:00:05.979 ************ 2025-05-30 01:05:41.506216 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-05-30 01:05:41.506227 | orchestrator | 2025-05-30 01:05:41.506238 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-05-30 01:05:41.506249 | orchestrator | Friday 30 May 2025 01:01:15 +0000 (0:00:03.254) 0:00:09.233 ************ 2025-05-30 01:05:41.506260 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-05-30 01:05:41.506310 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-05-30 01:05:41.506324 | orchestrator | 2025-05-30 01:05:41.506335 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-05-30 01:05:41.506345 | orchestrator | Friday 30 May 2025 01:01:21 +0000 (0:00:06.089) 0:00:15.323 ************ 2025-05-30 01:05:41.506356 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-30 01:05:41.506367 | orchestrator | 2025-05-30 01:05:41.506378 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-05-30 01:05:41.506389 | orchestrator | Friday 30 May 2025 01:01:24 +0000 (0:00:03.127) 0:00:18.451 ************ 2025-05-30 01:05:41.506400 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-30 01:05:41.506411 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-05-30 01:05:41.506421 | orchestrator | 2025-05-30 01:05:41.506432 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-05-30 01:05:41.506443 | orchestrator | Friday 30 May 2025 01:01:28 +0000 (0:00:03.606) 0:00:22.057 ************ 2025-05-30 01:05:41.506457 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-30 01:05:41.506475 | orchestrator | 2025-05-30 01:05:41.506492 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-05-30 01:05:41.506510 | orchestrator | Friday 30 May 2025 01:01:31 +0000 (0:00:03.137) 0:00:25.195 ************ 2025-05-30 01:05:41.506528 | orchestrator | changed: [testbed-node-0] => 
(item=neutron -> service -> admin) 2025-05-30 01:05:41.506546 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-05-30 01:05:41.506565 | orchestrator | 2025-05-30 01:05:41.506576 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-30 01:05:41.506587 | orchestrator | Friday 30 May 2025 01:01:39 +0000 (0:00:07.905) 0:00:33.101 ************ 2025-05-30 01:05:41.506598 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:05:41.506609 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:05:41.506619 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:05:41.506630 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:05:41.506641 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:05:41.506651 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:05:41.506662 | orchestrator | 2025-05-30 01:05:41.506673 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-05-30 01:05:41.506684 | orchestrator | Friday 30 May 2025 01:01:40 +0000 (0:00:00.933) 0:00:34.034 ************ 2025-05-30 01:05:41.506694 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:05:41.506714 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:05:41.506725 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:05:41.506735 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:05:41.506746 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:05:41.506757 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:05:41.506767 | orchestrator | 2025-05-30 01:05:41.506778 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-05-30 01:05:41.506789 | orchestrator | Friday 30 May 2025 01:01:43 +0000 (0:00:03.009) 0:00:37.044 ************ 2025-05-30 01:05:41.506800 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:05:41.506811 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:05:41.506821 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:05:41.506832 | orchestrator | ok: [testbed-node-4] 2025-05-30 01:05:41.506843 | orchestrator | ok: [testbed-node-5] 2025-05-30 01:05:41.506872 | orchestrator | ok: [testbed-node-3] 2025-05-30 01:05:41.506915 | orchestrator | 2025-05-30 01:05:41.506927 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-05-30 01:05:41.506968 | orchestrator | Friday 30 May 2025 01:01:45 +0000 (0:00:01.751) 0:00:38.796 ************ 2025-05-30 01:05:41.506980 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:05:41.506991 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:05:41.507022 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:05:41.507033 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:05:41.507044 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:05:41.507077 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:05:41.507088 | orchestrator | 2025-05-30 01:05:41.507099 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-05-30 01:05:41.507110 | orchestrator | Friday 30 May 2025 01:01:48 +0000 (0:00:03.358) 0:00:42.154 ************ 2025-05-30 01:05:41.507125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-30 01:05:41.507141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.507154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.507173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.507196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.507215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.507228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.507275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-30 01:05:41.507361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.507382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.507403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.507430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.507442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.507453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.507465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.507482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.507507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.507520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.507532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.507544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-05-30 01:05:41.507555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.507567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.507583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.507607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.507621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.507631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.507642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.507657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.507679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.507690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.507700 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.507710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.507721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.507736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.507761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-30 01:05:41.507772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.507782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.507792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.507802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.507828 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.507839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.507849 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.507860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.507870 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.507886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.507906 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.507917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.507927 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.507937 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.507948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.507968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.507984 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.507995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.508005 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.508016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 
'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.508026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.508036 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.508059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.508076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 
'timeout': '30'}}})  2025-05-30 01:05:41.508086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.508097 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.508107 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.508127 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.508168 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.508179 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.508190 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.508200 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.508216 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.508226 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.508246 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.508257 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.508267 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.508277 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.508377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.508394 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.508412 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.508423 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.508433 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.508443 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-30 01:05:41.508459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.508470 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.508485 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.508500 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.508511 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.508523 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.508538 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.508553 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-30 01:05:41.508568 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  
2025-05-30 01:05:41.508579 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.508589 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.508599 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.508615 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.508631 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 
01:05:41.509057 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.509074 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-30 01:05:41.509084 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.509102 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.509112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.509122 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.509148 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.509159 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.509169 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.509186 | orchestrator | 2025-05-30 01:05:41.509197 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-05-30 01:05:41.509207 | orchestrator | Friday 30 May 2025 01:01:51 +0000 (0:00:03.159) 0:00:45.314 ************ 2025-05-30 01:05:41.509217 | orchestrator | [WARNING]: Skipped 2025-05-30 01:05:41.509227 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-05-30 01:05:41.509237 | orchestrator | due to this access issue: 2025-05-30 01:05:41.509247 | 
orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2025-05-30 01:05:41.509257 | orchestrator | a directory
2025-05-30 01:05:41.509370 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-30 01:05:41.509385 | orchestrator |
2025-05-30 01:05:41.509395 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-05-30 01:05:41.509404 | orchestrator | Friday 30 May 2025 01:01:52 +0000 (0:00:00.715) 0:00:46.029 ************
2025-05-30 01:05:41.509428 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-30 01:05:41.509439 | orchestrator |
2025-05-30 01:05:41.509449 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2025-05-30 01:05:41.509458 | orchestrator | Friday 30 May 2025 01:01:53 +0000 (0:00:01.555) 0:00:47.585 ************
2025-05-30 01:05:41.509469 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-30 01:05:41.509569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-30 01:05:41.509584 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-30 01:05:41.509603 | orchestrator
| changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-30 01:05:41.509614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-30 01:05:41.509624 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-30 01:05:41.509634 | orchestrator | 2025-05-30 01:05:41.509648 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-05-30 01:05:41.509658 | orchestrator | Friday 30 May 2025 01:01:58 +0000 (0:00:04.266) 0:00:51.852 ************ 2025-05-30 01:05:41.509676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.509695 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:05:41.509759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.509798 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:05:41.509810 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.509819 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:05:41.509829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.509839 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:05:41.509923 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': 
True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.509934 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:05:41.509949 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.509965 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:05:41.509974 | orchestrator | 2025-05-30 01:05:41.509983 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-05-30 01:05:41.509993 | orchestrator | Friday 30 May 2025 01:02:01 +0000 (0:00:03.851) 0:00:55.704 ************ 2025-05-30 01:05:41.510002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.510012 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:05:41.510047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.510056 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:05:41.510068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.510077 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:05:41.510091 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.510105 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:05:41.510113 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.510121 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:05:41.510129 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-30 01:05:41.510138 | orchestrator | skipping: [testbed-node-4]
2025-05-30 01:05:41.510146 | orchestrator |
2025-05-30 01:05:41.510153 | orchestrator | TASK [neutron : Creating TLS backend PEM File] *********************************
2025-05-30 01:05:41.510162 | orchestrator | Friday 30 May 2025 01:02:06 +0000 (0:00:04.519) 0:01:00.223 ************
2025-05-30 01:05:41.510170 | orchestrator | skipping: [testbed-node-1]
2025-05-30 01:05:41.510177 | orchestrator | skipping: [testbed-node-2]
2025-05-30 01:05:41.510185 | orchestrator | skipping: [testbed-node-0]
2025-05-30 01:05:41.510193 | orchestrator | skipping: [testbed-node-3]
2025-05-30 01:05:41.510201 | orchestrator | skipping: [testbed-node-5]
2025-05-30 01:05:41.510209 | orchestrator | skipping: [testbed-node-4]
2025-05-30 01:05:41.510217 | orchestrator |
2025-05-30 01:05:41.510236 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2025-05-30 01:05:41.510245 | orchestrator | Friday 30 May 2025 01:02:10 +0000 (0:00:03.715) 0:01:03.938 ************
2025-05-30 01:05:41.510253 | orchestrator | skipping: [testbed-node-0]
2025-05-30 01:05:41.510261 | orchestrator |
2025-05-30 01:05:41.510269 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2025-05-30 01:05:41.510277 | orchestrator | Friday 30 May 2025 01:02:10 +0000 (0:00:00.109) 0:01:04.048 ************
2025-05-30 01:05:41.510306 | orchestrator | skipping: [testbed-node-0]
2025-05-30 01:05:41.510314 | orchestrator | skipping: [testbed-node-1]
2025-05-30 01:05:41.510322 | orchestrator | skipping: [testbed-node-2]
2025-05-30 01:05:41.510330 | orchestrator | skipping: [testbed-node-3]
2025-05-30 01:05:41.510338 | orchestrator | skipping: [testbed-node-4]
2025-05-30 01:05:41.510346 | orchestrator | skipping: [testbed-node-5]
2025-05-30 01:05:41.510354 | orchestrator |
2025-05-30 01:05:41.510362 | orchestrator | TASK [neutron : Copying over existing policy file] *****************************
2025-05-30 01:05:41.510370 | orchestrator | Friday 30 May 2025 01:02:11 +0000 (0:00:01.177) 0:01:05.225 ************
2025-05-30 01:05:41.510388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-30 01:05:41.510402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True,
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.510411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.510420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.510429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.510447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 
5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.510460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.510469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.510477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.510486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.510506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.510526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.510572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.510587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.510596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.510615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.510625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.510640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.510648 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:05:41.510666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.510710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.510719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.510727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.510740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.510753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.510765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.510774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 
01:05:41.510782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.510790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.510799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.510812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.510824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.510837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.510846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.510854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.510867 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:05:41.510875 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.510887 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.510900 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.510909 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.510918 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.510931 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': 
'30'}}})  2025-05-30 01:05:41.510939 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.510951 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.510964 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.510972 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.510981 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.510994 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.511002 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.511010 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.511049 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.511059 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.511067 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.511081 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:05:41.511089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.511153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.511181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.511197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.511211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.511238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.511247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.511256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.511268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 
01:05:41.511838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.511859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.511875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.511884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.511892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.511946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.511980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.511990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.512005 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:05:41.512014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.512022 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.512082 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.512113 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.512123 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.512569 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': 
'30'}}})  2025-05-30 01:05:41.512582 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.512591 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.512600 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.512622 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.512647 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.512662 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.512679 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.512687 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.512696 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.512714 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.512724 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.512737 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:05:41.512745 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.512754 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.512762 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.512803 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.512841 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.512857 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.512866 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.512874 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.512882 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 
01:05:41.512895 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.512908 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.512922 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.512930 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.512938 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.512947 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 
'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.512963 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.512976 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.512989 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:05:41.512997 | orchestrator | 2025-05-30 01:05:41.513006 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-05-30 01:05:41.513014 | orchestrator | Friday 30 May 2025 01:02:15 +0000 (0:00:04.204) 0:01:09.429 ************ 2025-05-30 01:05:41.513022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-30 01:05:41.513031 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.513039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.513052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.513065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.513080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.513088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.513096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.513105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.513113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.513130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 
01:05:41.513145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.513154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.513162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.513170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.513179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.513374 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.513394 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.513404 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.513414 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.513425 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.513438 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.513458 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.513468 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.513479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.513488 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.513498 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.513511 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.513535 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.513545 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 
01:05:41.513555 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.513565 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.513575 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.513592 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.513605 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.513614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-30 01:05:41.513623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.513631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.513639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.513662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.513671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.513680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.513688 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.513696 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.513722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.513740 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.513748 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.513757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.513765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.513784 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.513797 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.513807 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.513819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.513845 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.513853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.513860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.513880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.513890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.513914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.513923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.513930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.513937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-30 01:05:41.513952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.513963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.513971 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.513978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.513985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.513996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.514004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.514520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.514543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.514551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.514558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.514574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.514581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.514621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.514631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.514638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.514645 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-30 01:05:41.514657 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.514664 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.514675 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.514724 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.514734 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.514742 | orchestrator | skipping: [testbed-node-5] 
=> (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.514757 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.514768 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-30 01:05:41.514814 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.514823 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.514830 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': 
{'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.514837 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.514849 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.514857 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.515000 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port 
neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.515013 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-30 01:05:41.515020 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.515035 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.515042 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.515050 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.515102 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 
'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.515113 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.515120 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.515132 | orchestrator | 2025-05-30 01:05:41.515139 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-05-30 01:05:41.515147 | orchestrator | Friday 30 May 2025 01:02:20 +0000 (0:00:05.119) 0:01:14.549 ************ 2025-05-30 01:05:41.515154 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.515161 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.515210 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.515220 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.515233 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.515240 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.515247 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.515255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.515265 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.515333 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.515349 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.515357 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.515364 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.515375 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.515637 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.515651 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.516128 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.516145 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.516153 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.516215 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.516274 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.516320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.516328 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.516335 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.516343 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.516350 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.516619 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.516633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-30 01:05:41.516687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.516696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.516704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.517184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.517261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.517272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.517302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.517315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.517327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.517344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.517408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.517425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.517432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.517439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.517488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.517503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.517830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-30 01:05:41.517857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.517865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.517872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.517879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.517954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.517965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.517972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.517980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.517987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.517994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.518004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.518087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.518099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.518106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.518114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.518122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.518133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-30 01:05:41.518187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.518197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.518204 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.518211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.518223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.518369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.518383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.518391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.518398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.518405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.518412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.518430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.518472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.518482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.518490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.518497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.518504 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-30 01:05:41.518521 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.518568 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.518578 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.518585 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.518592 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.518606 | orchestrator | skipping: [testbed-node-4] 
=> (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.518617 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.518654 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-30 01:05:41.518662 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.518669 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.518675 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': 
{'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.518687 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.518721 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.518730 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.518737 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port 
neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.518743 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-30 01:05:41.518750 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.518763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.518773 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.518807 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.518815 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 
'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-30 01:05:41.518822 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-30 01:05:41.518833 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-30 01:05:41.518840 | orchestrator |
2025-05-30 01:05:41.518847 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ******************************
2025-05-30 01:05:41.519441 | orchestrator | Friday 30 May 2025 01:02:28 +0000 (0:00:07.360) 0:01:21.909 ************
2025-05-30 01:05:41.519463 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-30 01:05:41.519530 | orchestrator |
skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.519539 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.519546 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.519561 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.519568 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.519579 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.519615 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.519623 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.519629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.519641 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.519648 
| orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.519655 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.519693 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.519702 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.519710 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.519721 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.519728 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:05:41.519735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-30 01:05:41.519771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.519779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.519786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.519796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.519803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.519810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.519819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.519852 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.519860 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.519871 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.519878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.519887 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.520492 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.520554 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.520574 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.520581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.520588 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.520594 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': 
{'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.520613 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.520620 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.520632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.520639 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.520646 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.520652 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.520667 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.520674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.520719 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.520726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.520731 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:05:41.520737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.520743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.520755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.520761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.520771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.520777 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.520783 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.520791 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.520800 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.520811 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.520817 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.520822 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.520828 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.520837 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.520846 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.520856 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.520862 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.520868 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.520874 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.520882 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.520892 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.520904 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.520910 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:05:41.520915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}}) 2025-05-30 01:05:41.520921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.520927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.520938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.520951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.520956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.520962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.520968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.520974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.520982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.520994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.521000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.521006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.521012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.521018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.521026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 
6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.521039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.521045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-30 01:05:41.521050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.521056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.521065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.521077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.521083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.521089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.521094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.521100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.521106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.521122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.521129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.521136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.521143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.521149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.521156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.521173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.521180 | orchestrator | 2025-05-30 01:05:41.521186 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-05-30 01:05:41.521193 | orchestrator | Friday 30 May 2025 01:02:32 +0000 (0:00:04.070) 0:01:25.979 ************ 2025-05-30 01:05:41.521199 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:05:41.521205 | orchestrator | changed: [testbed-node-2] 2025-05-30 01:05:41.521212 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:05:41.521218 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:05:41.521224 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:05:41.521230 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:05:41.521236 | orchestrator | 2025-05-30 01:05:41.521242 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-05-30 01:05:41.521249 | orchestrator | Friday 30 May 2025 01:02:37 +0000 (0:00:05.325) 0:01:31.305 ************ 
2025-05-30 01:05:41.521255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.521262 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.521269 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.521300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.521315 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.521326 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.521336 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.521346 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.521355 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.521366 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.521379 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.521386 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.521393 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.521399 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.521406 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 
'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.521419 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.521431 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.521438 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:05:41.521444 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.521451 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.521458 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.521469 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.521482 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.521488 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.521493 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.521499 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.521505 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.521517 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.521525 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.522243 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.522351 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': 
{'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.522369 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.522382 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.522426 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.522454 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port 
neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.522467 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:05:41.522509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.522522 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.522534 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.522554 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.522566 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 
'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.522603 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.522628 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.522641 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.522652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.522698 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.522711 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.522728 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.522748 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.522761 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.522773 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u 
openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.522792 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.522804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.522816 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:05:41.522838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-30 01:05:41.522851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-30 01:05:41.522869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.522881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.522892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.522913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.522926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': 
{'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.522938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.522962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.522974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.522986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.523008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 
'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.523020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.523032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.523049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.523061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.523072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 
'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.523088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.523107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.523119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.523139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.523151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.523162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.523179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.523197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.523209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.523227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.523239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.523250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.523262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.523277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.523316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': 
{'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.523335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.523347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.523359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-30 01:05:41.523375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.523393 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.523412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.523424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.523435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.523447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.523463 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.523481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.523499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.523511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.523522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.523534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.523545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.523568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.523587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.523599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.523610 | orchestrator | 2025-05-30 01:05:41.523622 | orchestrator | TASK [neutron 
: Copying over linuxbridge_agent.ini] ****************************
2025-05-30 01:05:41.523657 | orchestrator | Friday 30 May 2025 01:02:41 +0000 (0:00:03.587) 0:01:34.892 ************
2025-05-30 01:05:41.523668 | orchestrator | skipping: [testbed-node-2]
2025-05-30 01:05:41.523680 | orchestrator | skipping: [testbed-node-0]
2025-05-30 01:05:41.523691 | orchestrator | skipping: [testbed-node-1]
2025-05-30 01:05:41.523702 | orchestrator | skipping: [testbed-node-4]
2025-05-30 01:05:41.523713 | orchestrator | skipping: [testbed-node-3]
2025-05-30 01:05:41.523723 | orchestrator | skipping: [testbed-node-5]
2025-05-30 01:05:41.523734 | orchestrator |
2025-05-30 01:05:41.523746 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2025-05-30 01:05:41.523757 | orchestrator | Friday 30 May 2025 01:02:43 +0000 (0:00:02.208) 0:01:37.101 ************
2025-05-30 01:05:41.523767 | orchestrator | skipping: [testbed-node-1]
2025-05-30 01:05:41.523778 | orchestrator | skipping: [testbed-node-0]
2025-05-30 01:05:41.523789 | orchestrator | skipping: [testbed-node-2]
2025-05-30 01:05:41.523800 | orchestrator | skipping: [testbed-node-3]
2025-05-30 01:05:41.523811 | orchestrator | skipping: [testbed-node-5]
2025-05-30 01:05:41.523822 | orchestrator | skipping: [testbed-node-4]
2025-05-30 01:05:41.523832 | orchestrator |
2025-05-30 01:05:41.523843 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2025-05-30 01:05:41.523854 | orchestrator | Friday 30 May 2025 01:02:45 +0000 (0:00:02.027) 0:01:39.128 ************
2025-05-30 01:05:41.523865 | orchestrator | skipping: [testbed-node-1]
2025-05-30 01:05:41.523876 | orchestrator | skipping: [testbed-node-0]
2025-05-30 01:05:41.523886 | orchestrator | skipping: [testbed-node-2]
2025-05-30 01:05:41.523898 | orchestrator | skipping: [testbed-node-5]
2025-05-30 01:05:41.523909 | orchestrator | skipping: [testbed-node-4]
2025-05-30 01:05:41.523929 | orchestrator | skipping: [testbed-node-3]
2025-05-30 01:05:41.523940 | orchestrator |
2025-05-30 01:05:41.523951 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2025-05-30 01:05:41.523962 | orchestrator | Friday 30 May 2025 01:02:47 +0000 (0:00:02.339) 0:01:41.467 ************
2025-05-30 01:05:41.523973 | orchestrator | skipping: [testbed-node-1]
2025-05-30 01:05:41.523984 | orchestrator | skipping: [testbed-node-0]
2025-05-30 01:05:41.523994 | orchestrator | skipping: [testbed-node-2]
2025-05-30 01:05:41.524005 | orchestrator | skipping: [testbed-node-3]
2025-05-30 01:05:41.524016 | orchestrator | skipping: [testbed-node-5]
2025-05-30 01:05:41.524027 | orchestrator | skipping: [testbed-node-4]
2025-05-30 01:05:41.524037 | orchestrator |
2025-05-30 01:05:41.524048 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2025-05-30 01:05:41.524059 | orchestrator | Friday 30 May 2025 01:02:50 +0000 (0:00:02.565) 0:01:44.033 ************
2025-05-30 01:05:41.524070 | orchestrator | skipping: [testbed-node-0]
2025-05-30 01:05:41.524086 | orchestrator | skipping: [testbed-node-2]
2025-05-30 01:05:41.524097 | orchestrator | skipping: [testbed-node-1]
2025-05-30 01:05:41.524108 | orchestrator | skipping: [testbed-node-3]
2025-05-30 01:05:41.524119 | orchestrator | skipping: [testbed-node-4]
2025-05-30 01:05:41.524129 | orchestrator | skipping: [testbed-node-5]
2025-05-30 01:05:41.524140 | orchestrator |
2025-05-30 01:05:41.524151 | orchestrator | TASK
[neutron : Copying over dhcp_agent.ini] *********************************** 2025-05-30 01:05:41.524162 | orchestrator | Friday 30 May 2025 01:02:52 +0000 (0:00:02.607) 0:01:46.640 ************ 2025-05-30 01:05:41.524173 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:05:41.524184 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:05:41.524195 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:05:41.524206 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:05:41.524222 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:05:41.524234 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:05:41.524245 | orchestrator | 2025-05-30 01:05:41.524256 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-05-30 01:05:41.524267 | orchestrator | Friday 30 May 2025 01:02:56 +0000 (0:00:03.182) 0:01:49.823 ************ 2025-05-30 01:05:41.524278 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-30 01:05:41.524319 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:05:41.524330 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-30 01:05:41.524341 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:05:41.524352 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-30 01:05:41.524363 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:05:41.524374 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-30 01:05:41.524385 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:05:41.524396 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-30 01:05:41.524407 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:05:41.524417 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-30 01:05:41.524428 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:05:41.524439 | orchestrator | 2025-05-30 01:05:41.524450 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-05-30 01:05:41.524461 | orchestrator | Friday 30 May 2025 01:02:57 +0000 (0:00:01.814) 0:01:51.637 ************ 2025-05-30 01:05:41.524473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.524493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.524509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.524527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.524540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.524551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.524570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.524581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.524593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.524614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.524626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.524638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.524649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.524667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.524679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.524696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.524714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 
'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.524725 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:05:41.524737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.524755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.524766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.524782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.525034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.525055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.525076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.525088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.525100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.525117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.525202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.525220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.525240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.525251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.525263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.525343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.525432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.525448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.525469 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:05:41.525481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.525492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.525503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.525590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.525608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.525630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.525642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.525653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.525665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.525681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.525758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.525782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.525793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.525805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.525817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.525835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 
'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.525847 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:05:41.525926 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.525950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.525962 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.525973 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.525991 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.526146 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.526178 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.526190 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.526202 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.526213 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.526225 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.526243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.526359 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.526377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.526388 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.526401 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.526413 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.526424 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:05:41.526504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.526529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.526540 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.526552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.526564 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.526645 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.526669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.526681 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.526693 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.526704 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.526716 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.526743 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.526862 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.526890 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.526908 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.526927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.526946 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.526985 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.527114 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.527135 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.527146 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.527158 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.527169 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.527181 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.527278 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.527317 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.527329 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.527340 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.527352 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.527372 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:05:41.527384 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.527470 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.527488 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.527500 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.527512 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.527523 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:05:41.527534 | orchestrator | 2025-05-30 01:05:41.527546 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-05-30 01:05:41.527565 | orchestrator | Friday 30 May 2025 01:02:59 +0000 (0:00:01.883) 0:01:53.520 ************ 2025-05-30 01:05:41.527577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.527666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.527683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.527695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.527706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.527725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.527742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.527820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.527837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.527849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.527861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.527880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.527891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.527908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.527993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.528010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.528022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.528041 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:05:41.528052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.528069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.528148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.528164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.528176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.528200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.528211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.528229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.528358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.528378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.528390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.528413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.528425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.528437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.528521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.528539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.528551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.528570 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:05:41.528582 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.528594 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.528674 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.528691 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.528702 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.528722 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.528734 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.528746 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.528762 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.528837 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.528851 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.528868 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.528879 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.528889 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.528905 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.528974 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.528988 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.529006 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:05:41.529016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.529027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.529042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.529109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.529124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.529142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.529152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.529163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.529173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.529243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.529259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.529276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.529309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.529320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.529331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.529411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.529426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.529443 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:05:41.529453 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.529463 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.529473 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.529488 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.529559 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.529580 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.529591 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.529601 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.529611 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.529626 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.529695 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.529716 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.529726 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.529736 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.529747 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.529763 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.529831 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.529851 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:05:41.529862 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.529872 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.529882 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.529893 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.529933 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.529951 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.529962 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.529972 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.529982 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.529992 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.530006 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.530081 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.530096 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.530106 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.530116 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.530127 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.530142 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.530157 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:05:41.530167 | orchestrator | 2025-05-30 01:05:41.530178 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-05-30 01:05:41.530212 | orchestrator | Friday 30 May 2025 01:03:02 +0000 (0:00:03.078) 0:01:56.599 ************ 2025-05-30 01:05:41.530224 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:05:41.530234 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:05:41.530244 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:05:41.530253 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:05:41.530263 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:05:41.530272 | orchestrator | 
skipping: [testbed-node-4] 2025-05-30 01:05:41.530299 | orchestrator | 2025-05-30 01:05:41.530310 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-05-30 01:05:41.530320 | orchestrator | Friday 30 May 2025 01:03:05 +0000 (0:00:03.024) 0:01:59.624 ************ 2025-05-30 01:05:41.530330 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:05:41.530339 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:05:41.530349 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:05:41.530358 | orchestrator | changed: [testbed-node-3] 2025-05-30 01:05:41.530368 | orchestrator | changed: [testbed-node-4] 2025-05-30 01:05:41.530377 | orchestrator | changed: [testbed-node-5] 2025-05-30 01:05:41.530387 | orchestrator | 2025-05-30 01:05:41.530396 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-05-30 01:05:41.530406 | orchestrator | Friday 30 May 2025 01:03:10 +0000 (0:00:04.378) 0:02:04.002 ************ 2025-05-30 01:05:41.530416 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:05:41.530425 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:05:41.530435 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:05:41.530444 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:05:41.530454 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:05:41.530466 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:05:41.530477 | orchestrator | 2025-05-30 01:05:41.530488 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-05-30 01:05:41.530500 | orchestrator | Friday 30 May 2025 01:03:12 +0000 (0:00:02.095) 0:02:06.098 ************ 2025-05-30 01:05:41.530511 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:05:41.530522 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:05:41.530534 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:05:41.530545 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:05:41.530556 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:05:41.530567 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:05:41.530578 | orchestrator | 2025-05-30 01:05:41.530589 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-05-30 01:05:41.530600 | orchestrator | Friday 30 May 2025 01:03:15 +0000 (0:00:02.768) 0:02:08.866 ************ 2025-05-30 01:05:41.530611 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:05:41.530623 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:05:41.530634 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:05:41.530645 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:05:41.530656 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:05:41.530667 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:05:41.530678 | orchestrator | 2025-05-30 01:05:41.530689 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-05-30 01:05:41.530701 | orchestrator | Friday 30 May 2025 01:03:18 +0000 (0:00:02.987) 0:02:11.854 ************ 2025-05-30 01:05:41.530712 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:05:41.530730 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:05:41.530741 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:05:41.530752 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:05:41.530763 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:05:41.530774 | orchestrator 
| skipping: [testbed-node-4] 2025-05-30 01:05:41.530785 | orchestrator | 2025-05-30 01:05:41.530797 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-05-30 01:05:41.530808 | orchestrator | Friday 30 May 2025 01:03:20 +0000 (0:00:02.082) 0:02:13.936 ************ 2025-05-30 01:05:41.530817 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:05:41.530827 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:05:41.530837 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:05:41.530846 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:05:41.530855 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:05:41.530865 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:05:41.530875 | orchestrator | 2025-05-30 01:05:41.530884 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-05-30 01:05:41.530894 | orchestrator | Friday 30 May 2025 01:03:21 +0000 (0:00:01.783) 0:02:15.720 ************ 2025-05-30 01:05:41.530903 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:05:41.530913 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:05:41.530922 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:05:41.530932 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:05:41.530941 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:05:41.530951 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:05:41.530960 | orchestrator | 2025-05-30 01:05:41.530970 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-05-30 01:05:41.530980 | orchestrator | Friday 30 May 2025 01:03:25 +0000 (0:00:03.748) 0:02:19.468 ************ 2025-05-30 01:05:41.530989 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:05:41.530999 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:05:41.531008 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:05:41.531018 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:05:41.531027 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:05:41.531037 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:05:41.531046 | orchestrator | 2025-05-30 01:05:41.531056 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-05-30 01:05:41.531066 | orchestrator | Friday 30 May 2025 01:03:28 +0000 (0:00:03.023) 0:02:22.492 ************ 2025-05-30 01:05:41.531084 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:05:41.531094 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:05:41.531103 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:05:41.531113 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:05:41.531122 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:05:41.531132 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:05:41.531142 | orchestrator | 2025-05-30 01:05:41.531151 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-05-30 01:05:41.531161 | orchestrator | Friday 30 May 2025 01:03:31 +0000 (0:00:02.290) 0:02:24.783 ************ 2025-05-30 01:05:41.531171 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-30 01:05:41.531209 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:05:41.531220 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-30 01:05:41.531230 | 
orchestrator | skipping: [testbed-node-1] 2025-05-30 01:05:41.531240 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-30 01:05:41.531250 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:05:41.531260 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-30 01:05:41.531269 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:05:41.531324 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-30 01:05:41.531342 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:05:41.531352 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-30 01:05:41.531362 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:05:41.531372 | orchestrator | 2025-05-30 01:05:41.531381 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-05-30 01:05:41.531391 | orchestrator | Friday 30 May 2025 01:03:33 +0000 (0:00:01.999) 0:02:26.783 ************ 2025-05-30 01:05:41.531401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.531412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.531422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.531437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.531479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.531497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.531508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.531518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.531528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.531542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.531579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.531598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.531609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.531619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': 
False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.531629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.531644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.531680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.531698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.531708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.531719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.531733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.531762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.531777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.531785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.531793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.531802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.531810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.531822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.531835 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:05:41.531866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.531876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.531884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.531892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.531901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.531912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.531926 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:05:41.531956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.531966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.531975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.531983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.531996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.532033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.532043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.532051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.532059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.532067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.532075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.532113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.532123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.532132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': 
False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.532140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.532149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.532157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.532173 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:05:41.532203 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.532213 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.532222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.532230 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.532238 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 
01:05:41.532275 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.532302 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.532311 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.532319 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.532328 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.532336 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.532353 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.532384 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.532394 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.532403 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.532412 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.532425 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.532458 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.532468 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.532476 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.532485 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.532498 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.532510 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.532518 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.532548 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.532557 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 
'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.532566 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.532574 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:05:41.532582 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.532595 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.532608 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.532637 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': 
False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.532647 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.532655 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.532671 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.532680 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:05:41.532688 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.532721 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.532731 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.532739 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.532753 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': 
'30'}}})  2025-05-30 01:05:41.532761 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.532773 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.532802 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.532811 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.532820 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.532828 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.532841 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.532850 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.532862 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.532891 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.532901 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-30 01:05:41.532914 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-30 01:05:41.532923 | orchestrator | skipping: [testbed-node-5]
2025-05-30 01:05:41.532931 | orchestrator |
2025-05-30 01:05:41.532939 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2025-05-30 01:05:41.532947 | orchestrator | Friday 30 May 2025 01:03:36 +0000 (0:00:03.323) 0:02:30.106 ************
2025-05-30 01:05:41.532955 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-30 01:05:41.532968 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-30 01:05:41.532998 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image':
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533007 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533021 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.533030 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533038 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.533050 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.533080 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533090 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.533103 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533111 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533123 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533154 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.533164 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-30 01:05:41.533177 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533185 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533193 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.533205 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533218 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.533226 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533248 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.533257 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533271 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.533301 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.533310 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-30 01:05:41.533333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': 
{'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.533379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.533396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.533404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-30 01:05:41.533429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.533443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.533459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.533468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.533505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.533522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.533559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}}})  2025-05-30 01:05:41.533576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.533584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.533604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.533630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.533638 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-30 01:05:41.533646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.533666 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.533692 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.533700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533709 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.533717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533729 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 
'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.533748 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.533757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-30 01:05:41.533773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-30 01:05:41.533815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533823 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.533832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.533918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.533962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.533979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.533987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.533996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.534038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.534055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.534064 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-30 01:05:41.534072 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.534080 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.534089 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-30 01:05:41.534107 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.534120 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.534129 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.534137 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.534146 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:05:41.534160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.534172 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:05:41.534185 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.534194 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.534202 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-30 01:05:41.534211 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-30 01:05:41.534224 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-30 01:05:41.534233 | orchestrator | 2025-05-30 01:05:41.534241 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-30 01:05:41.534253 | orchestrator | Friday 30 May 2025 01:03:39 +0000 (0:00:03.112) 0:02:33.219 ************ 2025-05-30 01:05:41.534261 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:05:41.534270 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:05:41.534278 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:05:41.534331 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:05:41.534339 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:05:41.534347 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:05:41.534355 | orchestrator | 2025-05-30 01:05:41.534363 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-05-30 01:05:41.534371 | orchestrator | Friday 30 May 2025 01:03:40 +0000 (0:00:00.609) 0:02:33.829 ************ 2025-05-30 01:05:41.534379 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:05:41.534387 | orchestrator | 2025-05-30 01:05:41.534399 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-05-30 01:05:41.534407 | orchestrator | Friday 30 May 2025 01:03:42 +0000 (0:00:02.324) 0:02:36.154 ************ 2025-05-30 01:05:41.534416 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:05:41.534423 | orchestrator | 2025-05-30 01:05:41.534431 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-05-30 01:05:41.534439 | orchestrator | Friday 30 May 2025 01:03:44 +0000 (0:00:02.206) 0:02:38.360 ************ 2025-05-30 01:05:41.534447 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:05:41.534455 | orchestrator | 2025-05-30 01:05:41.534463 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-30 01:05:41.534471 | orchestrator | Friday 30 May 2025 01:04:23 +0000 (0:00:39.031) 0:03:17.392 ************ 2025-05-30 01:05:41.534481 | orchestrator | 2025-05-30 01:05:41.534493 | orchestrator | TASK [neutron : Flush Handlers] 
************************************************ 2025-05-30 01:05:41.534506 | orchestrator | Friday 30 May 2025 01:04:23 +0000 (0:00:00.058) 0:03:17.451 ************ 2025-05-30 01:05:41.534515 | orchestrator | 2025-05-30 01:05:41.534523 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-30 01:05:41.534531 | orchestrator | Friday 30 May 2025 01:04:24 +0000 (0:00:00.310) 0:03:17.761 ************ 2025-05-30 01:05:41.534539 | orchestrator | 2025-05-30 01:05:41.534547 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-30 01:05:41.534555 | orchestrator | Friday 30 May 2025 01:04:24 +0000 (0:00:00.055) 0:03:17.817 ************ 2025-05-30 01:05:41.534562 | orchestrator | 2025-05-30 01:05:41.534570 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-30 01:05:41.534578 | orchestrator | Friday 30 May 2025 01:04:24 +0000 (0:00:00.054) 0:03:17.871 ************ 2025-05-30 01:05:41.534586 | orchestrator | 2025-05-30 01:05:41.534594 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-30 01:05:41.534608 | orchestrator | Friday 30 May 2025 01:04:24 +0000 (0:00:00.056) 0:03:17.928 ************ 2025-05-30 01:05:41.534616 | orchestrator | 2025-05-30 01:05:41.534624 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-05-30 01:05:41.534631 | orchestrator | Friday 30 May 2025 01:04:24 +0000 (0:00:00.348) 0:03:18.276 ************ 2025-05-30 01:05:41.534639 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:05:41.534647 | orchestrator | changed: [testbed-node-2] 2025-05-30 01:05:41.534655 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:05:41.534663 | orchestrator | 2025-05-30 01:05:41.534671 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-05-30 01:05:41.534678 | orchestrator | Friday 30 May 2025 01:04:50 +0000 (0:00:26.411) 0:03:44.688 ************ 2025-05-30 01:05:41.534685 | orchestrator | changed: [testbed-node-3] 2025-05-30 01:05:41.534691 | orchestrator | changed: [testbed-node-4] 2025-05-30 01:05:41.534698 | orchestrator | changed: [testbed-node-5] 2025-05-30 01:05:41.534705 | orchestrator | 2025-05-30 01:05:41.534712 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 01:05:41.534719 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-05-30 01:05:41.534727 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-05-30 01:05:41.534733 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-05-30 01:05:41.534740 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-05-30 01:05:41.534747 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-05-30 01:05:41.534754 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-05-30 01:05:41.534760 | orchestrator | 2025-05-30 01:05:41.534767 | orchestrator | 2025-05-30 01:05:41.534774 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-30 01:05:41.534781 | orchestrator | Friday 30 May 2025 
01:05:40 +0000 (0:00:49.309) 0:04:33.997 ************ 2025-05-30 01:05:41.534787 | orchestrator | =============================================================================== 2025-05-30 01:05:41.534794 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 49.31s 2025-05-30 01:05:41.534801 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 39.03s 2025-05-30 01:05:41.534807 | orchestrator | neutron : Restart neutron-server container ----------------------------- 26.41s 2025-05-30 01:05:41.534814 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.91s 2025-05-30 01:05:41.534820 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.36s 2025-05-30 01:05:41.534832 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.09s 2025-05-30 01:05:41.534838 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 5.33s 2025-05-30 01:05:41.534845 | orchestrator | neutron : Copying over config.json files for services ------------------- 5.12s 2025-05-30 01:05:41.534852 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 4.52s 2025-05-30 01:05:41.534858 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.38s 2025-05-30 01:05:41.534865 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.27s 2025-05-30 01:05:41.534875 | orchestrator | neutron : Copying over existing policy file ----------------------------- 4.20s 2025-05-30 01:05:41.534882 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 4.07s 2025-05-30 01:05:41.534894 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.85s 2025-05-30 01:05:41.534901 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 3.75s 2025-05-30 01:05:41.534908 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 3.72s 2025-05-30 01:05:41.534914 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.61s 2025-05-30 01:05:41.534921 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.59s 2025-05-30 01:05:41.534928 | orchestrator | Setting sysctl values --------------------------------------------------- 3.36s 2025-05-30 01:05:41.534934 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 3.32s 2025-05-30 01:05:41.534941 | orchestrator | 2025-05-30 01:05:41 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:05:41.534948 | orchestrator | 2025-05-30 01:05:41 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:05:41.534955 | orchestrator | 2025-05-30 01:05:41 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:05:44.556708 | orchestrator | 2025-05-30 01:05:44 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:05:44.556813 | orchestrator | 2025-05-30 01:05:44 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:05:44.557338 | orchestrator | 2025-05-30 01:05:44 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:05:44.558092 | orchestrator | 2025-05-30 01:05:44 | INFO  | Task 
64f0aa77-ad2e-4a3f-80a6-d50e8c159546 is in state STARTED 2025-05-30 01:05:44.558225 | orchestrator | 2025-05-30 01:05:44 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:05:44.558513 | orchestrator | 2025-05-30 01:05:44 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:05:47.622417 | orchestrator | 2025-05-30 01:05:47 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:05:47.624984 | orchestrator | 2025-05-30 01:05:47 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:05:47.627238 | orchestrator | 2025-05-30 01:05:47 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:05:47.629019 | orchestrator | 2025-05-30 01:05:47 | INFO  | Task 64f0aa77-ad2e-4a3f-80a6-d50e8c159546 is in state STARTED 2025-05-30 01:05:47.630527 | orchestrator | 2025-05-30 01:05:47 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:05:47.630746 | orchestrator | 2025-05-30 01:05:47 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:05:50.686578 | orchestrator | 2025-05-30 01:05:50 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:05:50.686664 | orchestrator | 2025-05-30 01:05:50 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:05:50.687178 | orchestrator | 2025-05-30 01:05:50 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:05:50.688525 | orchestrator | 2025-05-30 01:05:50 | INFO  | Task 64f0aa77-ad2e-4a3f-80a6-d50e8c159546 is in state STARTED 2025-05-30 01:05:50.689123 | orchestrator | 2025-05-30 01:05:50 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:05:50.689161 | orchestrator | 2025-05-30 01:05:50 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:05:53.737369 | orchestrator | 2025-05-30 01:05:53 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:05:53.737515 | orchestrator | 2025-05-30 01:05:53 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:05:53.738668 | orchestrator | 2025-05-30 01:05:53 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:05:53.739486 | orchestrator | 2025-05-30 01:05:53 | INFO  | Task 64f0aa77-ad2e-4a3f-80a6-d50e8c159546 is in state STARTED 2025-05-30 01:05:53.740188 | orchestrator | 2025-05-30 01:05:53 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:05:53.740473 | orchestrator | 2025-05-30 01:05:53 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:05:56.779636 | orchestrator | 2025-05-30 01:05:56 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:05:56.779740 | orchestrator | 2025-05-30 01:05:56 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:05:56.780190 | orchestrator | 2025-05-30 01:05:56 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:05:56.780967 | orchestrator | 2025-05-30 01:05:56 | INFO  | Task 64f0aa77-ad2e-4a3f-80a6-d50e8c159546 is in state STARTED 2025-05-30 01:05:56.781814 | orchestrator | 2025-05-30 01:05:56 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:05:56.781839 | orchestrator | 2025-05-30 01:05:56 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:05:59.818677 | orchestrator | 2025-05-30 01:05:59 | INFO  | Task 
fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:05:59.819839 | orchestrator | 2025-05-30 01:05:59 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:05:59.821587 | orchestrator | 2025-05-30 01:05:59 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:05:59.823405 | orchestrator | 2025-05-30 01:05:59 | INFO  | Task 64f0aa77-ad2e-4a3f-80a6-d50e8c159546 is in state STARTED 2025-05-30 01:05:59.824499 | orchestrator | 2025-05-30 01:05:59 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:05:59.824524 | orchestrator | 2025-05-30 01:05:59 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:06:02.876772 | orchestrator | 2025-05-30 01:06:02 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:06:02.878433 | orchestrator | 2025-05-30 01:06:02 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:06:02.882853 | orchestrator | 2025-05-30 01:06:02 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:06:02.885021 | orchestrator | 2025-05-30 01:06:02 | INFO  | Task 64f0aa77-ad2e-4a3f-80a6-d50e8c159546 is in state STARTED 2025-05-30 01:06:02.887698 | orchestrator | 2025-05-30 01:06:02 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:06:02.887746 | orchestrator | 2025-05-30 01:06:02 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:06:05.932716 | orchestrator | 2025-05-30 01:06:05 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:06:05.932920 | orchestrator | 2025-05-30 01:06:05 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state STARTED 2025-05-30 01:06:05.932952 | orchestrator | 2025-05-30 01:06:05 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:06:05.933668 | orchestrator | 2025-05-30 01:06:05 | INFO  | Task 64f0aa77-ad2e-4a3f-80a6-d50e8c159546 is in state STARTED 2025-05-30 01:06:05.937127 | orchestrator | 2025-05-30 01:06:05 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:06:05.937170 | orchestrator | 2025-05-30 01:06:05 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:06:08.973871 | orchestrator | 2025-05-30 01:06:08 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:06:08.977095 | orchestrator | 2025-05-30 01:06:08 | INFO  | Task ed15ac6b-9bb9-4a96-a736-e34be60963d8 is in state SUCCESS 2025-05-30 01:06:08.978825 | orchestrator | 2025-05-30 01:06:08.978861 | orchestrator | 2025-05-30 01:06:08.978872 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-30 01:06:08.978883 | orchestrator | 2025-05-30 01:06:08.978893 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-30 01:06:08.978903 | orchestrator | Friday 30 May 2025 01:04:09 +0000 (0:00:00.309) 0:00:00.309 ************ 2025-05-30 01:06:08.978913 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:06:08.978924 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:06:08.978937 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:06:08.978992 | orchestrator | 2025-05-30 01:06:08.979014 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-30 01:06:08.979064 | orchestrator | Friday 30 May 2025 01:04:10 +0000 (0:00:00.459) 0:00:00.768 ************ 
2025-05-30 01:06:08.979083 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-05-30 01:06:08.979100 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-05-30 01:06:08.979223 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-05-30 01:06:08.979244 | orchestrator | 2025-05-30 01:06:08.979300 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-05-30 01:06:08.979496 | orchestrator | 2025-05-30 01:06:08.979507 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-30 01:06:08.979530 | orchestrator | Friday 30 May 2025 01:04:10 +0000 (0:00:00.314) 0:00:01.082 ************ 2025-05-30 01:06:08.979541 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 01:06:08.979551 | orchestrator | 2025-05-30 01:06:08.979561 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-05-30 01:06:08.979571 | orchestrator | Friday 30 May 2025 01:04:11 +0000 (0:00:00.723) 0:00:01.806 ************ 2025-05-30 01:06:08.979607 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-05-30 01:06:08.979618 | orchestrator | 2025-05-30 01:06:08.979628 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-05-30 01:06:08.979638 | orchestrator | Friday 30 May 2025 01:04:14 +0000 (0:00:03.569) 0:00:05.376 ************ 2025-05-30 01:06:08.979648 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-05-30 01:06:08.979658 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-05-30 01:06:08.979667 | orchestrator | 2025-05-30 01:06:08.979677 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-05-30 01:06:08.979687 | orchestrator | Friday 30 May 2025 01:04:21 +0000 (0:00:06.456) 0:00:11.832 ************ 2025-05-30 01:06:08.979697 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-30 01:06:08.979706 | orchestrator | 2025-05-30 01:06:08.979716 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-05-30 01:06:08.979726 | orchestrator | Friday 30 May 2025 01:04:24 +0000 (0:00:03.336) 0:00:15.169 ************ 2025-05-30 01:06:08.979736 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-30 01:06:08.979746 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-05-30 01:06:08.979755 | orchestrator | 2025-05-30 01:06:08.979765 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-05-30 01:06:08.979775 | orchestrator | Friday 30 May 2025 01:04:28 +0000 (0:00:03.997) 0:00:19.167 ************ 2025-05-30 01:06:08.979784 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-30 01:06:08.979794 | orchestrator | 2025-05-30 01:06:08.979804 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-05-30 01:06:08.979813 | orchestrator | Friday 30 May 2025 01:04:31 +0000 (0:00:03.296) 0:00:22.463 ************ 2025-05-30 01:06:08.979837 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-05-30 01:06:08.979847 | orchestrator | 2025-05-30 01:06:08.979857 | orchestrator | TASK [magnum 
: Creating Magnum trustee domain] ********************************* 2025-05-30 01:06:08.979867 | orchestrator | Friday 30 May 2025 01:04:35 +0000 (0:00:04.156) 0:00:26.619 ************ 2025-05-30 01:06:08.979876 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:06:08.979886 | orchestrator | 2025-05-30 01:06:08.979896 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-05-30 01:06:08.979906 | orchestrator | Friday 30 May 2025 01:04:39 +0000 (0:00:03.208) 0:00:29.828 ************ 2025-05-30 01:06:08.979915 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:06:08.979925 | orchestrator | 2025-05-30 01:06:08.979935 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-05-30 01:06:08.979944 | orchestrator | Friday 30 May 2025 01:04:43 +0000 (0:00:04.161) 0:00:33.990 ************ 2025-05-30 01:06:08.979954 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:06:08.979964 | orchestrator | 2025-05-30 01:06:08.979974 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-05-30 01:06:08.979983 | orchestrator | Friday 30 May 2025 01:04:46 +0000 (0:00:03.650) 0:00:37.641 ************ 2025-05-30 01:06:08.980079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-30 01:06:08.980104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-30 01:06:08.980115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-30 01:06:08.980134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-30 01:06:08.980145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-30 01:06:08.980164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-30 01:06:08.980175 | orchestrator | 2025-05-30 01:06:08.980186 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-05-30 01:06:08.980196 | orchestrator | Friday 30 May 2025 01:04:48 +0000 (0:00:01.526) 0:00:39.167 ************ 2025-05-30 01:06:08.980206 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:06:08.980216 | orchestrator | 2025-05-30 01:06:08.980226 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-05-30 01:06:08.980236 | orchestrator | Friday 30 May 2025 01:04:48 +0000 (0:00:00.130) 0:00:39.298 ************ 2025-05-30 
01:06:08.980278 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:06:08.980291 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:06:08.980301 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:06:08.980310 | orchestrator | 2025-05-30 01:06:08.980320 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-05-30 01:06:08.980330 | orchestrator | Friday 30 May 2025 01:04:49 +0000 (0:00:00.414) 0:00:39.712 ************ 2025-05-30 01:06:08.980339 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-30 01:06:08.980349 | orchestrator | 2025-05-30 01:06:08.980364 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-05-30 01:06:08.980374 | orchestrator | Friday 30 May 2025 01:04:49 +0000 (0:00:00.519) 0:00:40.232 ************ 2025-05-30 01:06:08.980384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-30 01:06:08.980402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:06:08.980412 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:06:08.980423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-30 01:06:08.980440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:06:08.980451 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:06:08.980465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-30 01:06:08.980482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:06:08.980492 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:06:08.980502 | orchestrator | 2025-05-30 01:06:08.980512 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-05-30 01:06:08.980522 | orchestrator | Friday 30 May 2025 01:04:50 +0000 (0:00:00.920) 0:00:41.152 ************ 2025-05-30 01:06:08.980532 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:06:08.980541 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:06:08.980551 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:06:08.980561 | orchestrator | 2025-05-30 01:06:08.980571 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-30 01:06:08.980581 | orchestrator | Friday 30 May 2025 01:04:51 +0000 (0:00:00.575) 0:00:41.728 ************ 2025-05-30 01:06:08.980591 | orchestrator | 
included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 01:06:08.980600 | orchestrator | 2025-05-30 01:06:08.980610 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-05-30 01:06:08.980619 | orchestrator | Friday 30 May 2025 01:04:52 +0000 (0:00:01.557) 0:00:43.285 ************ 2025-05-30 01:06:08.980630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-30 01:06:08.980647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-30 01:06:08.980662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-30 01:06:08.980679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-30 01:06:08.980690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-30 01:06:08.980700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-30 01:06:08.980710 | orchestrator | 2025-05-30 01:06:08.980722 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-05-30 01:06:08.980734 | orchestrator | Friday 30 May 2025 01:04:55 +0000 (0:00:03.019) 0:00:46.304 ************ 2025-05-30 01:06:08.980752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-30 01:06:08.980775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:06:08.980787 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:06:08.980800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-30 01:06:08.980812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:06:08.980824 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:06:08.980835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-30 01:06:08.980854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:06:08.980871 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:06:08.980883 | orchestrator | 2025-05-30 01:06:08.980894 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-05-30 01:06:08.980906 | orchestrator | Friday 30 May 2025 01:04:56 +0000 (0:00:01.034) 0:00:47.338 ************ 2025-05-30 01:06:08.980927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-30 01:06:08.980940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:06:08.980951 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:06:08.980963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-30 01:06:08.980979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:06:08.980998 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:06:08.981015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-30 01:06:08.981027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:06:08.981039 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:06:08.981051 | orchestrator | 2025-05-30 01:06:08.981063 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-05-30 01:06:08.981074 | orchestrator | Friday 30 May 2025 01:04:58 +0000 (0:00:02.158) 0:00:49.496 ************ 2025-05-30 01:06:08.981084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-30 01:06:08.981094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-30 01:06:08.981116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-30 01:06:08.981131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-30 01:06:08.981142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-30 01:06:08.981152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-30 01:06:08.981163 | orchestrator | 2025-05-30 01:06:08.981173 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-05-30 01:06:08.981183 | orchestrator | Friday 30 May 2025 01:05:01 +0000 (0:00:02.933) 0:00:52.430 ************ 2025-05-30 01:06:08.981193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-30 01:06:08.981214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-30 01:06:08.981230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-30 01:06:08.981240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-30 01:06:08.981266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-30 01:06:08.981277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-30 01:06:08.981305 | orchestrator | 2025-05-30 01:06:08.981315 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-05-30 01:06:08.981329 | orchestrator | Friday 30 May 2025 01:05:08 +0000 (0:00:07.106) 0:00:59.537 ************ 2025-05-30 01:06:08.981344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-30 01:06:08.981355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:06:08.981365 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:06:08.981375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-30 01:06:08.981385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:06:08.981401 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:06:08.981417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-30 01:06:08.981432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:06:08.981442 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:06:08.981452 | orchestrator | 2025-05-30 01:06:08.981461 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-05-30 01:06:08.981471 | orchestrator | Friday 30 May 2025 01:05:09 +0000 (0:00:00.760) 0:01:00.298 ************ 2025-05-30 01:06:08.981481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-30 01:06:08.981492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 
'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-30 01:06:08.981509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-30 01:06:08.981524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-30 01:06:08.981539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-30 01:06:08.981550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 
5672'], 'timeout': '30'}}}) 2025-05-30 01:06:08.981560 | orchestrator | 2025-05-30 01:06:08.981570 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-30 01:06:08.981579 | orchestrator | Friday 30 May 2025 01:05:12 +0000 (0:00:02.980) 0:01:03.278 ************ 2025-05-30 01:06:08.981589 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:06:08.981599 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:06:08.981609 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:06:08.981618 | orchestrator | 2025-05-30 01:06:08.981628 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-05-30 01:06:08.981638 | orchestrator | Friday 30 May 2025 01:05:12 +0000 (0:00:00.270) 0:01:03.549 ************ 2025-05-30 01:06:08.981757 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:06:08.981790 | orchestrator | 2025-05-30 01:06:08.981806 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-05-30 01:06:08.981816 | orchestrator | Friday 30 May 2025 01:05:15 +0000 (0:00:02.416) 0:01:05.965 ************ 2025-05-30 01:06:08.981826 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:06:08.981835 | orchestrator | 2025-05-30 01:06:08.981845 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-05-30 01:06:08.981855 | orchestrator | Friday 30 May 2025 01:05:17 +0000 (0:00:02.287) 0:01:08.253 ************ 2025-05-30 01:06:08.981865 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:06:08.981874 | orchestrator | 2025-05-30 01:06:08.981884 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-30 01:06:08.981894 | orchestrator | Friday 30 May 2025 01:05:35 +0000 (0:00:17.993) 0:01:26.247 ************ 2025-05-30 01:06:08.981903 | orchestrator | 2025-05-30 01:06:08.981913 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-30 01:06:08.981922 | orchestrator | Friday 30 May 2025 01:05:35 +0000 (0:00:00.070) 0:01:26.317 ************ 2025-05-30 01:06:08.981932 | orchestrator | 2025-05-30 01:06:08.981942 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-30 01:06:08.981951 | orchestrator | Friday 30 May 2025 01:05:35 +0000 (0:00:00.185) 0:01:26.503 ************ 2025-05-30 01:06:08.981961 | orchestrator | 2025-05-30 01:06:08.981971 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-05-30 01:06:08.981981 | orchestrator | Friday 30 May 2025 01:05:35 +0000 (0:00:00.065) 0:01:26.568 ************ 2025-05-30 01:06:08.981990 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:06:08.982000 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:06:08.982009 | orchestrator | changed: [testbed-node-2] 2025-05-30 01:06:08.982053 | orchestrator | 2025-05-30 01:06:08.982063 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-05-30 01:06:08.982073 | orchestrator | Friday 30 May 2025 01:05:53 +0000 (0:00:17.488) 0:01:44.056 ************ 2025-05-30 01:06:08.982083 | orchestrator | changed: [testbed-node-2] 2025-05-30 01:06:08.982093 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:06:08.982103 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:06:08.982113 | orchestrator | 2025-05-30 01:06:08.982123 | orchestrator | PLAY RECAP 
********************************************************************* 2025-05-30 01:06:08.982141 | orchestrator | testbed-node-0 : ok=24  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-30 01:06:08.982173 | orchestrator | testbed-node-1 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-30 01:06:08.982184 | orchestrator | testbed-node-2 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-30 01:06:08.982194 | orchestrator | 2025-05-30 01:06:08.982203 | orchestrator | 2025-05-30 01:06:08.982213 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-30 01:06:08.982223 | orchestrator | Friday 30 May 2025 01:06:06 +0000 (0:00:13.020) 0:01:57.077 ************ 2025-05-30 01:06:08.982232 | orchestrator | =============================================================================== 2025-05-30 01:06:08.982242 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 17.99s 2025-05-30 01:06:08.982325 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 17.49s 2025-05-30 01:06:08.982344 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 13.02s 2025-05-30 01:06:08.982355 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 7.11s 2025-05-30 01:06:08.982366 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.46s 2025-05-30 01:06:08.982380 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.16s 2025-05-30 01:06:08.982392 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.16s 2025-05-30 01:06:08.982418 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.00s 2025-05-30 01:06:08.982432 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.65s 2025-05-30 01:06:08.982444 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.57s 2025-05-30 01:06:08.982456 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.34s 2025-05-30 01:06:08.982468 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.30s 2025-05-30 01:06:08.982481 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.21s 2025-05-30 01:06:08.982493 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 3.02s 2025-05-30 01:06:08.982507 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.98s 2025-05-30 01:06:08.982519 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.93s 2025-05-30 01:06:08.982532 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.42s 2025-05-30 01:06:08.982544 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.29s 2025-05-30 01:06:08.982557 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 2.16s 2025-05-30 01:06:08.982569 | orchestrator | magnum : include_tasks -------------------------------------------------- 1.56s 2025-05-30 01:06:08.982582 | orchestrator | 2025-05-30 01:06:08 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 
01:06:08.982596 | orchestrator | 2025-05-30 01:06:08 | INFO  | Task 64f0aa77-ad2e-4a3f-80a6-d50e8c159546 is in state STARTED 2025-05-30 01:06:08.982609 | orchestrator | 2025-05-30 01:06:08 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:06:08.983377 | orchestrator | 2025-05-30 01:06:08 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:06:08.983400 | orchestrator | 2025-05-30 01:06:08 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:06:12.033341 | orchestrator | 2025-05-30 01:06:12 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:06:12.034371 | orchestrator | 2025-05-30 01:06:12 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:06:12.034429 | orchestrator | 2025-05-30 01:06:12 | INFO  | Task 64f0aa77-ad2e-4a3f-80a6-d50e8c159546 is in state STARTED 2025-05-30 01:06:12.037159 | orchestrator | 2025-05-30 01:06:12 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:06:12.038436 | orchestrator | 2025-05-30 01:06:12 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:06:12.038472 | orchestrator | 2025-05-30 01:06:12 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:06:15.085972 | orchestrator | 2025-05-30 01:06:15 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:06:15.088788 | orchestrator | 2025-05-30 01:06:15 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:06:15.091539 | orchestrator | 2025-05-30 01:06:15 | INFO  | Task 64f0aa77-ad2e-4a3f-80a6-d50e8c159546 is in state SUCCESS 2025-05-30 01:06:15.093179 | orchestrator | 2025-05-30 01:06:15 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:06:15.095223 | orchestrator | 2025-05-30 01:06:15 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:06:15.095365 | orchestrator | 2025-05-30 01:06:15 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:06:18.134101 | orchestrator | 2025-05-30 01:06:18 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:06:18.134530 | orchestrator | 2025-05-30 01:06:18 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:06:18.134922 | orchestrator | 2025-05-30 01:06:18 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:06:18.135885 | orchestrator | 2025-05-30 01:06:18 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:06:18.136532 | orchestrator | 2025-05-30 01:06:18 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:06:18.136735 | orchestrator | 2025-05-30 01:06:18 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:06:21.168613 | orchestrator | 2025-05-30 01:06:21 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:06:21.168844 | orchestrator | 2025-05-30 01:06:21 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:06:21.169940 | orchestrator | 2025-05-30 01:06:21 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:06:21.170205 | orchestrator | 2025-05-30 01:06:21 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:06:21.172130 | orchestrator | 2025-05-30 01:06:21 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in 
state STARTED 2025-05-30 01:06:21.172166 | orchestrator | 2025-05-30 01:06:21 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:06:24.199159 | orchestrator | 2025-05-30 01:06:24 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:06:24.199809 | orchestrator | 2025-05-30 01:06:24 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:06:24.200770 | orchestrator | 2025-05-30 01:06:24 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:06:24.205852 | orchestrator | 2025-05-30 01:06:24 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:06:24.207444 | orchestrator | 2025-05-30 01:06:24 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:06:24.207479 | orchestrator | 2025-05-30 01:06:24 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:06:27.256121 | orchestrator | 2025-05-30 01:06:27 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:06:27.257558 | orchestrator | 2025-05-30 01:06:27 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:06:27.258907 | orchestrator | 2025-05-30 01:06:27 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:06:27.260000 | orchestrator | 2025-05-30 01:06:27 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:06:27.261837 | orchestrator | 2025-05-30 01:06:27 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:06:27.261864 | orchestrator | 2025-05-30 01:06:27 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:06:30.314269 | orchestrator | 2025-05-30 01:06:30 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:06:30.314369 | orchestrator | 2025-05-30 01:06:30 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:06:30.315336 | orchestrator | 2025-05-30 01:06:30 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:06:30.315728 | orchestrator | 2025-05-30 01:06:30 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:06:30.316676 | orchestrator | 2025-05-30 01:06:30 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:06:30.316698 | orchestrator | 2025-05-30 01:06:30 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:06:33.354415 | orchestrator | 2025-05-30 01:06:33 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:06:33.354541 | orchestrator | 2025-05-30 01:06:33 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:06:33.355006 | orchestrator | 2025-05-30 01:06:33 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:06:33.355689 | orchestrator | 2025-05-30 01:06:33 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:06:33.357809 | orchestrator | 2025-05-30 01:06:33 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:06:33.357839 | orchestrator | 2025-05-30 01:06:33 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:06:36.401468 | orchestrator | 2025-05-30 01:06:36 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:06:36.401602 | orchestrator | 2025-05-30 01:06:36 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in 
state STARTED 2025-05-30 01:06:36.401619 | orchestrator | 2025-05-30 01:06:36 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:06:36.402091 | orchestrator | 2025-05-30 01:06:36 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:06:36.403170 | orchestrator | 2025-05-30 01:06:36 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:06:36.403210 | orchestrator | 2025-05-30 01:06:36 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:06:39.441769 | orchestrator | 2025-05-30 01:06:39 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:06:39.442692 | orchestrator | 2025-05-30 01:06:39 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:06:39.444295 | orchestrator | 2025-05-30 01:06:39 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:06:39.445776 | orchestrator | 2025-05-30 01:06:39 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:06:39.447518 | orchestrator | 2025-05-30 01:06:39 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:06:39.447614 | orchestrator | 2025-05-30 01:06:39 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:06:42.508350 | orchestrator | 2025-05-30 01:06:42 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:06:42.508474 | orchestrator | 2025-05-30 01:06:42 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:06:42.509038 | orchestrator | 2025-05-30 01:06:42 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:06:42.510894 | orchestrator | 2025-05-30 01:06:42 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:06:42.511596 | orchestrator | 2025-05-30 01:06:42 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:06:42.511623 | orchestrator | 2025-05-30 01:06:42 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:06:45.550413 | orchestrator | 2025-05-30 01:06:45 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:06:45.551586 | orchestrator | 2025-05-30 01:06:45 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:06:45.553987 | orchestrator | 2025-05-30 01:06:45 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:06:45.556829 | orchestrator | 2025-05-30 01:06:45 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:06:45.557420 | orchestrator | 2025-05-30 01:06:45 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:06:45.557445 | orchestrator | 2025-05-30 01:06:45 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:06:48.587388 | orchestrator | 2025-05-30 01:06:48 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:06:48.588235 | orchestrator | 2025-05-30 01:06:48 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:06:48.588636 | orchestrator | 2025-05-30 01:06:48 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:06:48.589762 | orchestrator | 2025-05-30 01:06:48 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:06:48.591116 | orchestrator | 2025-05-30 01:06:48 | INFO  | Task 
460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:06:48.591192 | orchestrator | 2025-05-30 01:06:48 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:06:51.627032 | orchestrator | 2025-05-30 01:06:51 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:06:51.627699 | orchestrator | 2025-05-30 01:06:51 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:06:51.628519 | orchestrator | 2025-05-30 01:06:51 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:06:51.629296 | orchestrator | 2025-05-30 01:06:51 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:06:51.630636 | orchestrator | 2025-05-30 01:06:51 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:06:51.630685 | orchestrator | 2025-05-30 01:06:51 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:06:54.673543 | orchestrator | 2025-05-30 01:06:54 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:06:54.675791 | orchestrator | 2025-05-30 01:06:54 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:06:54.676998 | orchestrator | 2025-05-30 01:06:54 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:06:54.678146 | orchestrator | 2025-05-30 01:06:54 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:06:54.678902 | orchestrator | 2025-05-30 01:06:54 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:06:54.678930 | orchestrator | 2025-05-30 01:06:54 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:06:57.709327 | orchestrator | 2025-05-30 01:06:57 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:06:57.709789 | orchestrator | 2025-05-30 01:06:57 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:06:57.710813 | orchestrator | 2025-05-30 01:06:57 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:06:57.712951 | orchestrator | 2025-05-30 01:06:57 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:06:57.714470 | orchestrator | 2025-05-30 01:06:57 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:06:57.715844 | orchestrator | 2025-05-30 01:06:57 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:07:00.753575 | orchestrator | 2025-05-30 01:07:00 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:07:00.753696 | orchestrator | 2025-05-30 01:07:00 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:07:00.753735 | orchestrator | 2025-05-30 01:07:00 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:07:00.755162 | orchestrator | 2025-05-30 01:07:00 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:07:00.756119 | orchestrator | 2025-05-30 01:07:00 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:07:00.756151 | orchestrator | 2025-05-30 01:07:00 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:07:03.815842 | orchestrator | 2025-05-30 01:07:03 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:07:03.815957 | orchestrator | 2025-05-30 01:07:03 | INFO  | Task 
b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:07:03.817655 | orchestrator | 2025-05-30 01:07:03 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:07:03.818361 | orchestrator | 2025-05-30 01:07:03 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:07:03.820083 | orchestrator | 2025-05-30 01:07:03 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:07:03.820108 | orchestrator | 2025-05-30 01:07:03 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:07:06.861161 | orchestrator | 2025-05-30 01:07:06 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:07:06.861324 | orchestrator | 2025-05-30 01:07:06 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:07:06.861880 | orchestrator | 2025-05-30 01:07:06 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:07:06.862308 | orchestrator | 2025-05-30 01:07:06 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:07:06.862937 | orchestrator | 2025-05-30 01:07:06 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:07:06.862970 | orchestrator | 2025-05-30 01:07:06 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:07:09.896778 | orchestrator | 2025-05-30 01:07:09 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:07:09.896865 | orchestrator | 2025-05-30 01:07:09 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:07:09.897151 | orchestrator | 2025-05-30 01:07:09 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:07:09.897768 | orchestrator | 2025-05-30 01:07:09 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:07:09.898246 | orchestrator | 2025-05-30 01:07:09 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:07:09.898269 | orchestrator | 2025-05-30 01:07:09 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:07:12.915258 | orchestrator | 2025-05-30 01:07:12 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:07:12.915479 | orchestrator | 2025-05-30 01:07:12 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:07:12.915898 | orchestrator | 2025-05-30 01:07:12 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:07:12.919071 | orchestrator | 2025-05-30 01:07:12 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:07:12.919535 | orchestrator | 2025-05-30 01:07:12 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:07:12.919559 | orchestrator | 2025-05-30 01:07:12 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:07:15.942370 | orchestrator | 2025-05-30 01:07:15 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:07:15.942497 | orchestrator | 2025-05-30 01:07:15 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:07:15.942735 | orchestrator | 2025-05-30 01:07:15 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:07:15.943357 | orchestrator | 2025-05-30 01:07:15 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:07:15.943989 | orchestrator | 2025-05-30 
01:07:15 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:07:15.944011 | orchestrator | 2025-05-30 01:07:15 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:07:18.986273 | orchestrator | 2025-05-30 01:07:18 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:07:18.986464 | orchestrator | 2025-05-30 01:07:18 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:07:18.986485 | orchestrator | 2025-05-30 01:07:18 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:07:18.986497 | orchestrator | 2025-05-30 01:07:18 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:07:18.986508 | orchestrator | 2025-05-30 01:07:18 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:07:18.986520 | orchestrator | 2025-05-30 01:07:18 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:07:22.011451 | orchestrator | 2025-05-30 01:07:22 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:07:22.011846 | orchestrator | 2025-05-30 01:07:22 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:07:22.012573 | orchestrator | 2025-05-30 01:07:22 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:07:22.014503 | orchestrator | 2025-05-30 01:07:22 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:07:22.015871 | orchestrator | 2025-05-30 01:07:22 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:07:22.015900 | orchestrator | 2025-05-30 01:07:22 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:07:25.049766 | orchestrator | 2025-05-30 01:07:25 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:07:25.050067 | orchestrator | 2025-05-30 01:07:25 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:07:25.051072 | orchestrator | 2025-05-30 01:07:25 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state STARTED 2025-05-30 01:07:25.051639 | orchestrator | 2025-05-30 01:07:25 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:07:25.052846 | orchestrator | 2025-05-30 01:07:25 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:07:25.052868 | orchestrator | 2025-05-30 01:07:25 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:07:28.085963 | orchestrator | 2025-05-30 01:07:28 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:07:28.087128 | orchestrator | 2025-05-30 01:07:28 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:07:28.087534 | orchestrator | 2025-05-30 01:07:28 | INFO  | Task 70c10b08-43bd-4a43-bf8c-6c108b4856a4 is in state SUCCESS 2025-05-30 01:07:28.087852 | orchestrator | 2025-05-30 01:07:28.087876 | orchestrator | 2025-05-30 01:07:28.087888 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-30 01:07:28.087900 | orchestrator | 2025-05-30 01:07:28.087935 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-30 01:07:28.087947 | orchestrator | Friday 30 May 2025 01:05:42 +0000 (0:00:00.235) 0:00:00.235 ************ 2025-05-30 01:07:28.087959 | orchestrator | ok: [testbed-node-0] 2025-05-30 
01:07:28.087970 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:07:28.087981 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:07:28.088079 | orchestrator | ok: [testbed-manager] 2025-05-30 01:07:28.088092 | orchestrator | ok: [testbed-node-3] 2025-05-30 01:07:28.088103 | orchestrator | ok: [testbed-node-4] 2025-05-30 01:07:28.088114 | orchestrator | ok: [testbed-node-5] 2025-05-30 01:07:28.088124 | orchestrator | 2025-05-30 01:07:28.088136 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-30 01:07:28.088147 | orchestrator | Friday 30 May 2025 01:05:43 +0000 (0:00:00.724) 0:00:00.960 ************ 2025-05-30 01:07:28.088190 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-05-30 01:07:28.088214 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-05-30 01:07:28.088226 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-05-30 01:07:28.088237 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-05-30 01:07:28.088248 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-05-30 01:07:28.088259 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-05-30 01:07:28.088270 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-05-30 01:07:28.088281 | orchestrator | 2025-05-30 01:07:28.088292 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-05-30 01:07:28.088303 | orchestrator | 2025-05-30 01:07:28.088314 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-05-30 01:07:28.088325 | orchestrator | Friday 30 May 2025 01:05:44 +0000 (0:00:00.849) 0:00:01.809 ************ 2025-05-30 01:07:28.088336 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 01:07:28.088349 | orchestrator | 2025-05-30 01:07:28.088360 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-05-30 01:07:28.088371 | orchestrator | Friday 30 May 2025 01:05:45 +0000 (0:00:01.484) 0:00:03.293 ************ 2025-05-30 01:07:28.088382 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-05-30 01:07:28.088393 | orchestrator | 2025-05-30 01:07:28.088404 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-05-30 01:07:28.088415 | orchestrator | Friday 30 May 2025 01:05:49 +0000 (0:00:03.457) 0:00:06.751 ************ 2025-05-30 01:07:28.088467 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-05-30 01:07:28.088481 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-05-30 01:07:28.088492 | orchestrator | 2025-05-30 01:07:28.088503 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-05-30 01:07:28.088514 | orchestrator | Friday 30 May 2025 01:05:55 +0000 (0:00:06.172) 0:00:12.924 ************ 2025-05-30 01:07:28.088525 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-30 01:07:28.088537 | orchestrator | 2025-05-30 01:07:28.088548 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 
2025-05-30 01:07:28.088558 | orchestrator | Friday 30 May 2025 01:05:58 +0000 (0:00:03.231) 0:00:16.155 ************ 2025-05-30 01:07:28.088652 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-30 01:07:28.088667 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-05-30 01:07:28.088678 | orchestrator | 2025-05-30 01:07:28.088689 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-05-30 01:07:28.088700 | orchestrator | Friday 30 May 2025 01:06:02 +0000 (0:00:03.637) 0:00:19.793 ************ 2025-05-30 01:07:28.088788 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-30 01:07:28.088804 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-05-30 01:07:28.088815 | orchestrator | 2025-05-30 01:07:28.088826 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-05-30 01:07:28.088837 | orchestrator | Friday 30 May 2025 01:06:08 +0000 (0:00:06.133) 0:00:25.927 ************ 2025-05-30 01:07:28.088848 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-05-30 01:07:28.088859 | orchestrator | 2025-05-30 01:07:28.088870 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 01:07:28.088881 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 01:07:28.088892 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 01:07:28.088904 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 01:07:28.088915 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 01:07:28.088926 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 01:07:28.088950 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 01:07:28.088962 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 01:07:28.088973 | orchestrator | 2025-05-30 01:07:28.088985 | orchestrator | 2025-05-30 01:07:28.088996 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-30 01:07:28.089007 | orchestrator | Friday 30 May 2025 01:06:14 +0000 (0:00:05.748) 0:00:31.675 ************ 2025-05-30 01:07:28.089018 | orchestrator | =============================================================================== 2025-05-30 01:07:28.089029 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.17s 2025-05-30 01:07:28.089040 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.13s 2025-05-30 01:07:28.089051 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.75s 2025-05-30 01:07:28.089067 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.64s 2025-05-30 01:07:28.089078 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.46s 2025-05-30 01:07:28.089090 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.23s 2025-05-30 01:07:28.089100 | orchestrator | ceph-rgw : include_tasks 
------------------------------------------------ 1.48s 2025-05-30 01:07:28.089111 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.85s 2025-05-30 01:07:28.089122 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.72s 2025-05-30 01:07:28.089133 | orchestrator | 2025-05-30 01:07:28.089144 | orchestrator | 2025-05-30 01:07:28 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:07:28.089155 | orchestrator | 2025-05-30 01:07:28 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:07:28.089240 | orchestrator | 2025-05-30 01:07:28 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:07:31.125298 | orchestrator | 2025-05-30 01:07:31 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:07:31.125411 | orchestrator | 2025-05-30 01:07:31 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:07:31.125651 | orchestrator | 2025-05-30 01:07:31 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:07:31.126126 | orchestrator | 2025-05-30 01:07:31 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:07:31.126625 | orchestrator | 2025-05-30 01:07:31 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:07:31.126649 | orchestrator | 2025-05-30 01:07:31 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:07:34.157625 | orchestrator | 2025-05-30 01:07:34 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:07:34.157835 | orchestrator | 2025-05-30 01:07:34 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:07:34.158336 | orchestrator | 2025-05-30 01:07:34 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:07:34.158734 | orchestrator | 2025-05-30 01:07:34 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:07:34.159294 | orchestrator | 2025-05-30 01:07:34 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:07:34.159317 | orchestrator | 2025-05-30 01:07:34 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:07:37.182701 | orchestrator | 2025-05-30 01:07:37 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:07:37.182824 | orchestrator | 2025-05-30 01:07:37 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:07:37.183223 | orchestrator | 2025-05-30 01:07:37 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:07:37.183769 | orchestrator | 2025-05-30 01:07:37 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:07:37.184293 | orchestrator | 2025-05-30 01:07:37 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:07:37.185434 | orchestrator | 2025-05-30 01:07:37 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:07:40.203935 | orchestrator | 2025-05-30 01:07:40 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:07:40.204045 | orchestrator | 2025-05-30 01:07:40 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:07:40.204266 | orchestrator | 2025-05-30 01:07:40 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:07:40.204644 | 
orchestrator | 2025-05-30 01:07:40 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:07:40.205099 | orchestrator | 2025-05-30 01:07:40 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:07:40.205119 | orchestrator | 2025-05-30 01:07:40 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:07:43.228731 | orchestrator | 2025-05-30 01:07:43 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:07:43.228926 | orchestrator | 2025-05-30 01:07:43 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:07:43.229592 | orchestrator | 2025-05-30 01:07:43 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:07:43.231324 | orchestrator | 2025-05-30 01:07:43 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:07:43.232010 | orchestrator | 2025-05-30 01:07:43 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:07:43.232035 | orchestrator | 2025-05-30 01:07:43 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:07:46.281549 | orchestrator | 2025-05-30 01:07:46 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:07:46.282662 | orchestrator | 2025-05-30 01:07:46 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:07:46.282719 | orchestrator | 2025-05-30 01:07:46 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:07:46.284084 | orchestrator | 2025-05-30 01:07:46 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:07:46.285229 | orchestrator | 2025-05-30 01:07:46 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:07:46.285257 | orchestrator | 2025-05-30 01:07:46 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:07:49.316921 | orchestrator | 2025-05-30 01:07:49 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:07:49.317009 | orchestrator | 2025-05-30 01:07:49 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:07:49.317275 | orchestrator | 2025-05-30 01:07:49 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:07:49.317997 | orchestrator | 2025-05-30 01:07:49 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:07:49.318493 | orchestrator | 2025-05-30 01:07:49 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:07:49.318513 | orchestrator | 2025-05-30 01:07:49 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:07:52.349914 | orchestrator | 2025-05-30 01:07:52 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:07:52.352659 | orchestrator | 2025-05-30 01:07:52 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:07:52.354482 | orchestrator | 2025-05-30 01:07:52 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:07:52.354929 | orchestrator | 2025-05-30 01:07:52 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:07:52.355510 | orchestrator | 2025-05-30 01:07:52 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:07:52.355538 | orchestrator | 2025-05-30 01:07:52 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:07:55.387332 | 
orchestrator | 2025-05-30 01:07:55 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:07:55.387477 | orchestrator | 2025-05-30 01:07:55 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:07:55.388416 | orchestrator | 2025-05-30 01:07:55 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:07:55.388679 | orchestrator | 2025-05-30 01:07:55 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:07:55.389407 | orchestrator | 2025-05-30 01:07:55 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:07:55.389442 | orchestrator | 2025-05-30 01:07:55 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:07:58.431000 | orchestrator | 2025-05-30 01:07:58 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:07:58.431080 | orchestrator | 2025-05-30 01:07:58 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:07:58.431806 | orchestrator | 2025-05-30 01:07:58 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:07:58.432458 | orchestrator | 2025-05-30 01:07:58 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:07:58.436677 | orchestrator | 2025-05-30 01:07:58 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:07:58.437682 | orchestrator | 2025-05-30 01:07:58 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:08:01.475215 | orchestrator | 2025-05-30 01:08:01 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:08:01.475321 | orchestrator | 2025-05-30 01:08:01 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:08:01.475337 | orchestrator | 2025-05-30 01:08:01 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:08:01.475350 | orchestrator | 2025-05-30 01:08:01 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:08:01.475380 | orchestrator | 2025-05-30 01:08:01 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:08:01.475394 | orchestrator | 2025-05-30 01:08:01 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:08:04.509435 | orchestrator | 2025-05-30 01:08:04 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:08:04.509528 | orchestrator | 2025-05-30 01:08:04 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:08:04.509542 | orchestrator | 2025-05-30 01:08:04 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:08:04.509554 | orchestrator | 2025-05-30 01:08:04 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:08:04.509565 | orchestrator | 2025-05-30 01:08:04 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:08:04.509575 | orchestrator | 2025-05-30 01:08:04 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:08:07.532785 | orchestrator | 2025-05-30 01:08:07 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:08:07.532885 | orchestrator | 2025-05-30 01:08:07 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:08:07.532899 | orchestrator | 2025-05-30 01:08:07 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 
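The magnum-api and magnum-conductor items logged in the magnum play above each carry a Kolla-style healthcheck block (interval/retries/start_period/test/timeout, e.g. `healthcheck_curl http://192.168.16.10:9511`). As a minimal illustration only — not the code path the kolla_docker module actually takes — the sketch below maps such a dict onto the equivalent standard `docker run --health-*` flags; the helper name and the assumption that the values are seconds are mine, not taken from the log.

```python
# Sketch: translate a Kolla-style healthcheck dict (as seen in the log items
# above) into standard `docker run` health options. Illustrative only; the
# real deployment goes through the kolla_docker Ansible module, not this code.

def healthcheck_to_docker_flags(hc: dict) -> list[str]:
    """Assumes the numeric fields are seconds and 'test' starts with CMD-SHELL."""
    kind, cmd = hc["test"][0], " ".join(hc["test"][1:])
    if kind != "CMD-SHELL":
        raise ValueError(f"unsupported test type: {kind}")
    return [
        "--health-cmd", cmd,
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]


if __name__ == "__main__":
    example = {
        "interval": "30", "retries": "3", "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9511"],
        "timeout": "30",
    }
    print(" ".join(healthcheck_to_docker_flags(example)))
```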
2025-05-30 01:08:07.533415 | orchestrator | 2025-05-30 01:08:07 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:08:07.533974 | orchestrator | 2025-05-30 01:08:07 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:08:07.533997 | orchestrator | 2025-05-30 01:08:07 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:08:10.563072 | orchestrator | 2025-05-30 01:08:10 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:08:10.563219 | orchestrator | 2025-05-30 01:08:10 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:08:10.564327 | orchestrator | 2025-05-30 01:08:10 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:08:10.564463 | orchestrator | 2025-05-30 01:08:10 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:08:10.565114 | orchestrator | 2025-05-30 01:08:10 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:08:10.565202 | orchestrator | 2025-05-30 01:08:10 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:08:13.603027 | orchestrator | 2025-05-30 01:08:13 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:08:13.604167 | orchestrator | 2025-05-30 01:08:13 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:08:13.604201 | orchestrator | 2025-05-30 01:08:13 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:08:13.604528 | orchestrator | 2025-05-30 01:08:13 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:08:13.605148 | orchestrator | 2025-05-30 01:08:13 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:08:13.605255 | orchestrator | 2025-05-30 01:08:13 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:08:16.653818 | orchestrator | 2025-05-30 01:08:16 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:08:16.653941 | orchestrator | 2025-05-30 01:08:16 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:08:16.654952 | orchestrator | 2025-05-30 01:08:16 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:08:16.656818 | orchestrator | 2025-05-30 01:08:16 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:08:16.657158 | orchestrator | 2025-05-30 01:08:16 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:08:16.657189 | orchestrator | 2025-05-30 01:08:16 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:08:19.683804 | orchestrator | 2025-05-30 01:08:19 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:08:19.683938 | orchestrator | 2025-05-30 01:08:19 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:08:19.684294 | orchestrator | 2025-05-30 01:08:19 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:08:19.684750 | orchestrator | 2025-05-30 01:08:19 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:08:19.685562 | orchestrator | 2025-05-30 01:08:19 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:08:19.685656 | orchestrator | 2025-05-30 01:08:19 | INFO  | Wait 1 second(s) until the next check 
2025-05-30 01:08:22.729208 | orchestrator | 2025-05-30 01:08:22 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:08:22.729340 | orchestrator | 2025-05-30 01:08:22 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:08:22.730691 | orchestrator | 2025-05-30 01:08:22 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:08:22.731014 | orchestrator | 2025-05-30 01:08:22 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:08:22.731825 | orchestrator | 2025-05-30 01:08:22 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:08:22.731852 | orchestrator | 2025-05-30 01:08:22 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:08:25.786662 | orchestrator | 2025-05-30 01:08:25 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:08:25.787488 | orchestrator | 2025-05-30 01:08:25 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:08:25.790891 | orchestrator | 2025-05-30 01:08:25 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:08:25.791856 | orchestrator | 2025-05-30 01:08:25 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:08:25.793351 | orchestrator | 2025-05-30 01:08:25 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:08:25.793374 | orchestrator | 2025-05-30 01:08:25 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:08:28.848149 | orchestrator | 2025-05-30 01:08:28 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:08:28.851779 | orchestrator | 2025-05-30 01:08:28 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:08:28.853613 | orchestrator | 2025-05-30 01:08:28 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:08:28.855279 | orchestrator | 2025-05-30 01:08:28 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:08:28.856728 | orchestrator | 2025-05-30 01:08:28 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:08:28.856764 | orchestrator | 2025-05-30 01:08:28 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:08:31.914433 | orchestrator | 2025-05-30 01:08:31 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:08:31.914542 | orchestrator | 2025-05-30 01:08:31 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:08:31.916327 | orchestrator | 2025-05-30 01:08:31 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:08:31.918455 | orchestrator | 2025-05-30 01:08:31 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:08:31.919462 | orchestrator | 2025-05-30 01:08:31 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:08:31.919896 | orchestrator | 2025-05-30 01:08:31 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:08:34.981275 | orchestrator | 2025-05-30 01:08:34 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:08:34.982912 | orchestrator | 2025-05-30 01:08:34 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:08:34.984602 | orchestrator | 2025-05-30 01:08:34 | INFO  | Task 
b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:08:34.986497 | orchestrator | 2025-05-30 01:08:34 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:08:34.987833 | orchestrator | 2025-05-30 01:08:34 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:08:34.987873 | orchestrator | 2025-05-30 01:08:34 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:08:38.032778 | orchestrator | 2025-05-30 01:08:38 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:08:38.036342 | orchestrator | 2025-05-30 01:08:38 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:08:38.038791 | orchestrator | 2025-05-30 01:08:38 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:08:38.038884 | orchestrator | 2025-05-30 01:08:38 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:08:38.039777 | orchestrator | 2025-05-30 01:08:38 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:08:38.039812 | orchestrator | 2025-05-30 01:08:38 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:08:41.072223 | orchestrator | 2025-05-30 01:08:41 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:08:41.074729 | orchestrator | 2025-05-30 01:08:41 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:08:41.076725 | orchestrator | 2025-05-30 01:08:41 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:08:41.078444 | orchestrator | 2025-05-30 01:08:41 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:08:41.080140 | orchestrator | 2025-05-30 01:08:41 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:08:41.080622 | orchestrator | 2025-05-30 01:08:41 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:08:44.117797 | orchestrator | 2025-05-30 01:08:44 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:08:44.118417 | orchestrator | 2025-05-30 01:08:44 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:08:44.118975 | orchestrator | 2025-05-30 01:08:44 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:08:44.120700 | orchestrator | 2025-05-30 01:08:44 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:08:44.121310 | orchestrator | 2025-05-30 01:08:44 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:08:44.121335 | orchestrator | 2025-05-30 01:08:44 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:08:47.173568 | orchestrator | 2025-05-30 01:08:47 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:08:47.175655 | orchestrator | 2025-05-30 01:08:47 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:08:47.180865 | orchestrator | 2025-05-30 01:08:47 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:08:47.182321 | orchestrator | 2025-05-30 01:08:47 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state STARTED 2025-05-30 01:08:47.183579 | orchestrator | 2025-05-30 01:08:47 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:08:47.183606 | orchestrator | 2025-05-30 
01:08:47 | INFO  | Wait 1 second(s) until the next check
2025-05-30 01:08:50.228240 | orchestrator | 2025-05-30 01:08:50 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED
2025-05-30 01:08:50.228525 | orchestrator | 2025-05-30 01:08:50 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED
2025-05-30 01:08:50.229110 | orchestrator | 2025-05-30 01:08:50 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED
2025-05-30 01:08:50.231903 | orchestrator | 2025-05-30 01:08:50 | INFO  | Task 5ac27643-0d63-4649-8cd8-0f4867a3e50e is in state SUCCESS
2025-05-30 01:08:50.233121 | orchestrator |
2025-05-30 01:08:50.233148 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-05-30 01:08:50.233157 | orchestrator |
2025-05-30 01:08:50.233165 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-05-30 01:08:50.233172 | orchestrator | Friday 30 May 2025 01:01:06 +0000 (0:00:00.164) 0:00:00.164 ************
2025-05-30 01:08:50.233180 | orchestrator | changed: [localhost]
2025-05-30 01:08:50.233188 | orchestrator |
2025-05-30 01:08:50.233196 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-05-30 01:08:50.233203 | orchestrator | Friday 30 May 2025 01:01:06 +0000 (0:00:00.543) 0:00:00.708 ************
2025-05-30 01:08:50.233210 | orchestrator |
2025-05-30 01:08:50.233218 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-05-30 01:08:50.233224 | orchestrator |
2025-05-30 01:08:50.233232 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-05-30 01:08:50.233238 | orchestrator |
2025-05-30 01:08:50.233245 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-05-30 01:08:50.233252 | orchestrator |
2025-05-30 01:08:50.233259 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-05-30 01:08:50.233299 | orchestrator |
2025-05-30 01:08:50.233322 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-05-30 01:08:50.233328 | orchestrator |
2025-05-30 01:08:50.233335 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-05-30 01:08:50.233341 | orchestrator |
2025-05-30 01:08:50.233347 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-05-30 01:08:50.233353 | orchestrator |
2025-05-30 01:08:50.233359 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-05-30 01:08:50.233387 | orchestrator | changed: [localhost]
2025-05-30 01:08:50.233394 | orchestrator |
2025-05-30 01:08:50.233411 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-05-30 01:08:50.233421 | orchestrator | Friday 30 May 2025 01:07:11 +0000 (0:06:05.134) 0:06:05.843 ************
2025-05-30 01:08:50.233431 | orchestrator | changed: [localhost]
2025-05-30 01:08:50.233440 | orchestrator |
2025-05-30 01:08:50.233450 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-30 01:08:50.233460 | orchestrator |
2025-05-30 01:08:50.233504 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-30 01:08:50.233510 | orchestrator | Friday 30 May 2025 01:07:25 +0000 (0:00:13.631) 0:06:19.475 ************
2025-05-30 01:08:50.233517 | orchestrator | ok: [testbed-node-0]
2025-05-30 01:08:50.233523 | orchestrator | ok: [testbed-node-1]
2025-05-30 01:08:50.233529 | orchestrator | ok: [testbed-node-2]
2025-05-30 01:08:50.233535 | orchestrator |
2025-05-30 01:08:50.233541 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-30 01:08:50.233547 | orchestrator | Friday 30 May 2025 01:07:26 +0000 (0:00:00.840) 0:06:20.316 ************
2025-05-30 01:08:50.233646 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-05-30 01:08:50.233655 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-05-30 01:08:50.233662 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-05-30 01:08:50.233668 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-05-30 01:08:50.233675 | orchestrator |
2025-05-30 01:08:50.233681 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-05-30 01:08:50.233687 | orchestrator | skipping: no hosts matched
2025-05-30 01:08:50.233694 | orchestrator |
2025-05-30 01:08:50.233701 | orchestrator | PLAY RECAP *********************************************************************
2025-05-30 01:08:50.233731 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-30 01:08:50.233739 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-30 01:08:50.233746 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-30 01:08:50.233752 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-30 01:08:50.233810 | orchestrator |
2025-05-30 01:08:50.233817 | orchestrator |
2025-05-30 01:08:50.233823 | orchestrator | TASKS RECAP ********************************************************************
2025-05-30 01:08:50.233829 | orchestrator | Friday 30 May 2025 01:07:27 +0000 (0:00:00.639) 0:06:20.956 ************
2025-05-30 01:08:50.233835 | orchestrator | ===============================================================================
2025-05-30 01:08:50.233841 | orchestrator | Download ironic-agent initramfs --------------------------------------- 365.13s
2025-05-30 01:08:50.233848 | orchestrator | Download ironic-agent kernel ------------------------------------------- 13.63s
2025-05-30 01:08:50.233854 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.84s
2025-05-30 01:08:50.233860 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.64s
2025-05-30 01:08:50.233866 | orchestrator | Ensure the destination directory exists --------------------------------- 0.54s
2025-05-30 01:08:50.233872 | orchestrator |
2025-05-30 01:08:50.233878 | orchestrator |
2025-05-30 01:08:50.233905 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-30 01:08:50.233912 | orchestrator |
2025-05-30 01:08:50.233919 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-30 01:08:50.233925 | orchestrator | Friday 30 May 2025 01:04:46 +0000 (0:00:00.319) 0:00:00.319 ************
2025-05-30 01:08:50.233931 | orchestrator | ok: [testbed-manager]
2025-05-30 01:08:50.233937
| orchestrator | ok: [testbed-node-0] 2025-05-30 01:08:50.233952 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:08:50.233959 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:08:50.233965 | orchestrator | ok: [testbed-node-3] 2025-05-30 01:08:50.233971 | orchestrator | ok: [testbed-node-4] 2025-05-30 01:08:50.233978 | orchestrator | ok: [testbed-node-5] 2025-05-30 01:08:50.233984 | orchestrator | 2025-05-30 01:08:50.234001 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-30 01:08:50.234055 | orchestrator | Friday 30 May 2025 01:04:47 +0000 (0:00:01.022) 0:00:01.341 ************ 2025-05-30 01:08:50.234071 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-05-30 01:08:50.234078 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-05-30 01:08:50.234143 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-05-30 01:08:50.234158 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-05-30 01:08:50.234165 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-05-30 01:08:50.234171 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-05-30 01:08:50.234177 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-05-30 01:08:50.234183 | orchestrator | 2025-05-30 01:08:50.234196 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-05-30 01:08:50.234202 | orchestrator | 2025-05-30 01:08:50.234209 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-30 01:08:50.234215 | orchestrator | Friday 30 May 2025 01:04:48 +0000 (0:00:01.014) 0:00:02.355 ************ 2025-05-30 01:08:50.234221 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 01:08:50.234229 | orchestrator | 2025-05-30 01:08:50.234235 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-05-30 01:08:50.234241 | orchestrator | Friday 30 May 2025 01:04:50 +0000 (0:00:01.496) 0:00:03.852 ************ 2025-05-30 01:08:50.234256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-30 01:08:50.234268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-30 01:08:50.234275 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-30 01:08:50.234290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-30 01:08:50.234304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-30 01:08:50.234315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-30 01:08:50.234322 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-30 01:08:50.234328 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.234352 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.234360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.234372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-30 01:08:50.234400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.234408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.234418 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.234425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.234432 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-30 01:08:50.234438 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-30 01:08:50.234452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.234465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.234477 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-30 01:08:50.234495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-30 01:08:50.234505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-30 01:08:50.234521 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': 
{'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-30 01:08:50.234528 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.234535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.234545 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-30 01:08:50.234551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.234558 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-30 01:08:50.234569 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.234575 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.234611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-30 01:08:50.234619 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.234633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-30 01:08:50.234712 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.234731 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.234738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.234749 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-30 01:08:50.234756 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.234766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-30 01:08:50.234773 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-30 01:08:50.234779 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.234798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.234805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.234817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-30 01:08:50.234828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 
'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-30 01:08:50.234835 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.234866 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-30 01:08:50.234874 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-30 01:08:50.234885 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.234914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.234926 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.234933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.234943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-30 01:08:50.234950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.234957 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.234968 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-30 01:08:50.234976 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-30 01:08:50.234985 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.234992 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-30 01:08:50.235004 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.235010 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.235017 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.235036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.235043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.235076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-30 01:08:50.235140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
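Each loop item in the task output above is one entry of the service map that the prometheus role iterates over; flattened into the log it is hard to scan, so here is the same structure for a single service, rendered as YAML. Only the keys and values visible in the per-item output are taken from the log; the surrounding variable name (`prometheus_services`) is an assumption about the role defaults, not something this log confirms.

```yaml
# Sketch only: one service entry as reflected in the loop items above.
# The top-level variable name (prometheus_services) is assumed, not
# confirmed by this log; all field values below appear verbatim in the
# per-item output for prometheus-node-exporter.
prometheus_services:
  prometheus-node-exporter:
    container_name: prometheus_node_exporter
    group: prometheus-node-exporter
    enabled: true
    image: registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206
    pid_mode: host
    volumes:
      - /etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - kolla_logs:/var/log/kolla/
      - /:/host:ro,rslave
    dimensions: {}
```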
2025-05-30 01:08:50.235153 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.235164 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.235176 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.235183 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.235196 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-30 01:08:50.235207 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 
'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-30 01:08:50.235219 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.235226 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.235232 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.235239 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.235250 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2025-05-30 01:08:50.235257 | orchestrator | 2025-05-30 01:08:50.235264 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-30 01:08:50.235271 | orchestrator | Friday 30 May 2025 01:04:54 +0000 (0:00:04.716) 0:00:08.569 ************ 2025-05-30 01:08:50.235277 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 01:08:50.235284 | orchestrator | 2025-05-30 01:08:50.235296 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-05-30 01:08:50.235302 | orchestrator | Friday 30 May 2025 01:04:56 +0000 (0:00:01.948) 0:00:10.517 ************ 2025-05-30 01:08:50.235315 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-30 01:08:50.235322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-30 01:08:50.235328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-30 01:08:50.235335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-30 01:08:50.235341 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-30 01:08:50.235351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.235358 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-30 01:08:50.235373 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.235379 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-30 01:08:50.235386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.235392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.235398 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-30 01:08:50.235409 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-30 01:08:50.235416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.235431 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.235438 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.235445 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.235451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.235458 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.235464 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.235498 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.235513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.235523 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.235530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.235536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.235542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.235549 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.235593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.235617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.235624 | orchestrator | 2025-05-30 01:08:50.235630 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-05-30 01:08:50.235637 | orchestrator | Friday 30 May 2025 01:05:02 +0000 (0:00:06.286) 0:00:16.804 ************ 2025-05-30 01:08:50.235647 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-30 01:08:50.235653 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-30 01:08:50.235660 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-30 01:08:50.235667 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  
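Entries for services that sit behind HAProxy additionally carry an `haproxy` mapping describing the internal and external frontends, as in the prometheus-alertmanager item directly above. A minimal YAML sketch of that sub-structure, copied from that item (the `auth_pass` value is replaced by a placeholder here rather than repeating the credential that appears in the log):

```yaml
# Sketch only: the haproxy sub-structure carried by the
# prometheus-alertmanager entry in the loop item directly above.
# Values are copied from that item, except auth_pass, which is a
# placeholder here instead of the secret printed in the log.
prometheus-alertmanager:
  # container_name, group, enabled, image, volumes, dimensions as shown above
  haproxy:
    prometheus_alertmanager:
      enabled: true
      mode: http
      external: false
      port: "9093"
      auth_user: admin
      auth_pass: "<redacted>"
      active_passive: true
    prometheus_alertmanager_external:
      enabled: true
      mode: http
      external: true
      external_fqdn: api.testbed.osism.xyz
      port: "9093"
      listen_port: "9093"
      auth_user: admin
      auth_pass: "<redacted>"
      active_passive: true
```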
2025-05-30 01:08:50.235724 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.235732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-30 01:08:50.235738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.235748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.235755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-30 01:08:50.235762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.235768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-30 01:08:50.235774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.235790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.235797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-30 01:08:50.235807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.235814 | orchestrator | skipping: [testbed-manager] 2025-05-30 01:08:50.235820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-30 01:08:50.235827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.235833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.235840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-30 01:08:50.235851 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:08:50.235875 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:08:50.235883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.235893 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:08:50.235900 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-30 01:08:50.235906 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-30 01:08:50.235916 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-30 01:08:50.235923 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:08:50.235929 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-30 01:08:50.235936 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-30 01:08:50.235942 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-30 01:08:50.235955 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:08:50.235962 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-30 01:08:50.235974 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-30 01:08:50.235981 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-30 01:08:50.235987 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:08:50.235993 | orchestrator | 2025-05-30 01:08:50.236000 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-05-30 01:08:50.236006 | orchestrator | Friday 30 May 2025 01:05:05 +0000 (0:00:02.637) 0:00:19.442 ************ 2025-05-30 01:08:50.236016 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-30 01:08:50.236023 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-30 01:08:50.236029 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-30 01:08:50.236040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-30 01:08:50.236046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.236057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.236064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-30 01:08:50.236073 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-30 01:08:50.236098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.236116 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.236126 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:08:50.236138 | orchestrator | skipping: [testbed-manager] 2025-05-30 01:08:50.236147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-30 01:08:50.236185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.236197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.236206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-30 01:08:50.236221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.236232 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:08:50.236242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-30 01:08:50.236261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.236269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.236275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-30 01:08:50.236286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.236292 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:08:50.236299 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-30 01:08:50.236309 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-30 01:08:50.236316 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-30 01:08:50.236322 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:08:50.236333 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-30 01:08:50.236340 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-30 01:08:50.236346 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-30 01:08:50.236352 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:08:50.236359 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-30 01:08:50.236369 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-30 01:08:50.236376 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-30 01:08:50.236382 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:08:50.236388 | orchestrator | 2025-05-30 01:08:50.236398 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-05-30 01:08:50.236405 | orchestrator | Friday 30 May 2025 01:05:09 +0000 (0:00:03.392) 0:00:22.834 ************ 2025-05-30 01:08:50.236412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-30 01:08:50.236423 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-30 01:08:50.236432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-30 01:08:50.236477 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-30 01:08:50.236489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-30 01:08:50.236496 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-30 01:08:50.236510 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-30 01:08:50.236517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-30 01:08:50.236524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-30 01:08:50.236837 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-30 01:08:50.236851 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.236862 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.236876 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-30 01:08:50.236883 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.237004 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.237013 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-30 01:08:50.237019 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.237031 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.237038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-30 01:08:50.237051 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-30 01:08:50.237062 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.237069 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.237075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.237132 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.237140 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.237153 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 
'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-30 01:08:50.237170 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-30 01:08:50.237177 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.237184 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.237190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.237196 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.237207 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-30 01:08:50.237221 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-30 01:08:50.237228 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.237234 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.237241 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.237251 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 
'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-30 01:08:50.237259 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-30 01:08:50.237272 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.237279 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.237286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.237292 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.237299 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-30 01:08:50.237309 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-30 01:08:50.237323 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.237330 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.237336 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.237343 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.237349 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.237356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.237366 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.237376 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.237387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.237394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-30 01:08:50.237400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-30 01:08:50.237407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.237422 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.237429 | orchestrator | 
skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-30 01:08:50.237439 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.237446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.237454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-30 01:08:50.237461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-30 01:08:50.237477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.237490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-30 01:08:50.237498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-30 01:08:50.237505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.237513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.237520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-30 01:08:50.237536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.237544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.237555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.237562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-30 01:08:50.237570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': 
['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.237577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.237584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.237596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-30 01:08:50.237606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.237614 | orchestrator | 2025-05-30 01:08:50.237621 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-05-30 01:08:50.237628 | orchestrator | Friday 30 May 2025 01:05:15 +0000 (0:00:06.979) 0:00:29.814 ************ 2025-05-30 01:08:50.237636 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-30 01:08:50.237643 | orchestrator | 2025-05-30 01:08:50.237650 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-05-30 01:08:50.237657 | orchestrator | Friday 30 May 2025 01:05:16 +0000 (0:00:00.562) 0:00:30.376 ************ 2025-05-30 01:08:50.237668 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1316084, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.3965907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237676 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1316084, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.3965907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237683 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1316084, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.3965907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237691 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1316084, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.3965907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237703 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1316084, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.3965907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237714 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1316084, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.3965907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237722 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1316094, 'dev': 188, 'nlink': 1, 
'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.3995907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237732 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1316094, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.3995907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237740 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1316094, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.3995907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237748 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1316094, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.3995907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237755 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1316094, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.3995907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237766 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1316087, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.3975909, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237799 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1316084, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.3965907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-30 01:08:50.237808 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1316094, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.3995907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237818 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1316087, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.3975909, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237825 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1316087, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.3975909, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237832 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1316087, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.3975909, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237838 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1316087, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.3975909, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237850 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1316087, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.3975909, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237860 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1316092, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3985908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237867 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1316092, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3985908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237876 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1316092, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3985908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237883 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1316092, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3985908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237890 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1316092, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3985908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237900 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1316092, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3985908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237906 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1316119, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.406591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237916 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1316101, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.4015908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237922 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1316119, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.406591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237933 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1316119, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.406591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237939 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1316094, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.3995907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-30 01:08:50.237946 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1316119, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.406591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237956 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1316119, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.406591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237963 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1316119, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.406591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237972 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1316101, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.4015908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237979 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1316090, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3985908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237989 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1316101, 'dev': 188, 'nlink': 1, 
'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.4015908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.237996 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1316101, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.4015908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.238002 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1316101, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.4015908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.238049 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1316101, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.4015908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.238058 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1316090, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3985908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.238703 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1316090, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3985908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.238800 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1316097, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.400591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.238831 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1316090, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3985908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.238845 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1316090, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3985908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.238857 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1316090, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3985908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.238891 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1316116, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.4055908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.238904 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1316087, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.3975909, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-30 01:08:50.238934 | orchestrator 
| skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1316097, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.400591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.238946 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1316097, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.400591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.238963 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1316097, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.400591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.238975 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1316097, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.400591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.238994 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1316089, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3985908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.239005 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1316097, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.400591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.239016 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1316106, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.4025908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.239029 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:08:50.239047 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1316116, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.4055908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.239060 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1316116, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.4055908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.239076 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1316116, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.4055908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.239120 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1316116, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.4055908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.239138 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1316116, 'dev': 188, 'nlink': 1, 'atime': 
1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.4055908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.239149 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1316089, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3985908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.239161 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1316089, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3985908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.239179 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1316089, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3985908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.239190 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1316092, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3985908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-30 01:08:50.239206 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1316089, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3985908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.239218 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1316089, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3985908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.239236 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1316106, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.4025908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.239247 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:08:50.239259 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1316106, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.4025908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.239270 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:08:50.239281 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1316106, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.4025908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.239292 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:08:50.239309 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1316106, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.4025908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.239321 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:08:50.239332 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1316106, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 
'ctime': 1748564122.4025908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-30 01:08:50.239343 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:08:50.239359 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1316119, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.406591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-30 01:08:50.239377 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1316101, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.4015908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-30 01:08:50.239388 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1316090, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3985908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-30 01:08:50.239400 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1316097, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.400591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-30 01:08:50.239411 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1316116, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.4055908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-30 01:08:50.239428 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1316089, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3985908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-30 01:08:50.239440 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1316106, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.4025908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-30 01:08:50.239458 | orchestrator | 2025-05-30 01:08:50.239470 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-05-30 01:08:50.239482 | orchestrator | Friday 30 May 2025 01:05:48 +0000 (0:00:31.749) 0:01:02.125 ************ 2025-05-30 01:08:50.239493 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-30 01:08:50.239504 | orchestrator | 2025-05-30 01:08:50.239515 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-05-30 01:08:50.239531 | orchestrator | Friday 30 May 2025 01:05:48 +0000 (0:00:00.403) 0:01:02.529 ************ 2025-05-30 01:08:50.239542 | orchestrator | [WARNING]: Skipped 2025-05-30 01:08:50.239553 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-30 01:08:50.239564 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-05-30 01:08:50.239575 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-30 01:08:50.239586 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-05-30 01:08:50.239597 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-30 01:08:50.239607 | orchestrator | [WARNING]: Skipped 2025-05-30 01:08:50.239618 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-30 01:08:50.239629 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-05-30 01:08:50.239639 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-30 01:08:50.239650 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-05-30 01:08:50.239660 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-30 01:08:50.239671 | orchestrator | [WARNING]: Skipped 2025-05-30 01:08:50.239682 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-30 01:08:50.239693 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-05-30 01:08:50.239704 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-30 01:08:50.239714 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-05-30 01:08:50.239725 | orchestrator | [WARNING]: Skipped 2025-05-30 01:08:50.239735 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-30 01:08:50.239746 | orchestrator | 
node-2/prometheus.yml.d' path due to this access issue: 2025-05-30 01:08:50.239757 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-30 01:08:50.239767 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-05-30 01:08:50.239778 | orchestrator | [WARNING]: Skipped 2025-05-30 01:08:50.239789 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-30 01:08:50.239799 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-05-30 01:08:50.239810 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-30 01:08:50.239820 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-05-30 01:08:50.239831 | orchestrator | [WARNING]: Skipped 2025-05-30 01:08:50.239842 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-30 01:08:50.239853 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-05-30 01:08:50.239863 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-30 01:08:50.239874 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-05-30 01:08:50.239885 | orchestrator | [WARNING]: Skipped 2025-05-30 01:08:50.239896 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-30 01:08:50.239906 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-05-30 01:08:50.239917 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-30 01:08:50.239927 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-05-30 01:08:50.239938 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-30 01:08:50.239949 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-30 01:08:50.239965 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-30 01:08:50.239976 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-30 01:08:50.239987 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-30 01:08:50.239997 | orchestrator | 2025-05-30 01:08:50.240008 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-05-30 01:08:50.240019 | orchestrator | Friday 30 May 2025 01:05:50 +0000 (0:00:01.335) 0:01:03.865 ************ 2025-05-30 01:08:50.240030 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-30 01:08:50.240041 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:08:50.240057 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-30 01:08:50.240068 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:08:50.240079 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-30 01:08:50.240105 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:08:50.240116 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-30 01:08:50.240127 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:08:50.240138 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-30 01:08:50.240149 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:08:50.240160 | orchestrator | skipping: [testbed-node-5] => 
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-30 01:08:50.240170 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:08:50.240181 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-05-30 01:08:50.240192 | orchestrator | 2025-05-30 01:08:50.240203 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-05-30 01:08:50.240213 | orchestrator | Friday 30 May 2025 01:06:05 +0000 (0:00:15.084) 0:01:18.949 ************ 2025-05-30 01:08:50.240224 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-30 01:08:50.240235 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:08:50.240251 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-30 01:08:50.240262 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:08:50.240273 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-30 01:08:50.240284 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:08:50.240295 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-30 01:08:50.240306 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:08:50.240316 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-30 01:08:50.240327 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:08:50.240338 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-30 01:08:50.240348 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:08:50.240359 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-05-30 01:08:50.240370 | orchestrator | 2025-05-30 01:08:50.240381 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-05-30 01:08:50.240392 | orchestrator | Friday 30 May 2025 01:06:10 +0000 (0:00:05.211) 0:01:24.160 ************ 2025-05-30 01:08:50.240403 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-30 01:08:50.240414 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:08:50.240425 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-30 01:08:50.240448 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:08:50.240459 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-30 01:08:50.240470 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:08:50.240481 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-30 01:08:50.240491 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:08:50.240502 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-30 01:08:50.240513 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:08:50.240524 | orchestrator | skipping: [testbed-node-5] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-30 01:08:50.240535 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:08:50.240545 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-05-30 01:08:50.240556 | orchestrator | 2025-05-30 01:08:50.240567 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-05-30 01:08:50.240577 | orchestrator | Friday 30 May 2025 01:06:14 +0000 (0:00:03.956) 0:01:28.117 ************ 2025-05-30 01:08:50.240588 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-30 01:08:50.240599 | orchestrator | 2025-05-30 01:08:50.240609 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-05-30 01:08:50.240620 | orchestrator | Friday 30 May 2025 01:06:14 +0000 (0:00:00.446) 0:01:28.563 ************ 2025-05-30 01:08:50.240631 | orchestrator | skipping: [testbed-manager] 2025-05-30 01:08:50.240642 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:08:50.240652 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:08:50.240663 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:08:50.240674 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:08:50.240684 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:08:50.240695 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:08:50.240706 | orchestrator | 2025-05-30 01:08:50.240716 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-05-30 01:08:50.240727 | orchestrator | Friday 30 May 2025 01:06:15 +0000 (0:00:01.049) 0:01:29.612 ************ 2025-05-30 01:08:50.240743 | orchestrator | skipping: [testbed-manager] 2025-05-30 01:08:50.240754 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:08:50.240765 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:08:50.240775 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:08:50.240786 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:08:50.240797 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:08:50.240807 | orchestrator | changed: [testbed-node-2] 2025-05-30 01:08:50.240818 | orchestrator | 2025-05-30 01:08:50.240829 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-05-30 01:08:50.240840 | orchestrator | Friday 30 May 2025 01:06:19 +0000 (0:00:04.087) 0:01:33.700 ************ 2025-05-30 01:08:50.240851 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-30 01:08:50.240861 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:08:50.240872 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-30 01:08:50.240883 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:08:50.240893 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-30 01:08:50.240904 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:08:50.240915 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-30 01:08:50.240925 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:08:50.240936 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-30 01:08:50.240958 | orchestrator | 
skipping: [testbed-node-4] 2025-05-30 01:08:50.240973 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-30 01:08:50.240984 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:08:50.240995 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-30 01:08:50.241006 | orchestrator | skipping: [testbed-manager] 2025-05-30 01:08:50.241016 | orchestrator | 2025-05-30 01:08:50.241027 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-05-30 01:08:50.241038 | orchestrator | Friday 30 May 2025 01:06:22 +0000 (0:00:02.735) 0:01:36.436 ************ 2025-05-30 01:08:50.241049 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-30 01:08:50.241059 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:08:50.241070 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-30 01:08:50.241137 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-30 01:08:50.241150 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:08:50.241161 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:08:50.241172 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-30 01:08:50.241183 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:08:50.241194 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-30 01:08:50.241204 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:08:50.241215 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-30 01:08:50.241226 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:08:50.241237 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-05-30 01:08:50.241248 | orchestrator | 2025-05-30 01:08:50.241259 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-05-30 01:08:50.241269 | orchestrator | Friday 30 May 2025 01:06:25 +0000 (0:00:02.911) 0:01:39.347 ************ 2025-05-30 01:08:50.241280 | orchestrator | [WARNING]: Skipped 2025-05-30 01:08:50.241291 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-05-30 01:08:50.241302 | orchestrator | due to this access issue: 2025-05-30 01:08:50.241313 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-05-30 01:08:50.241323 | orchestrator | not a directory 2025-05-30 01:08:50.241334 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-30 01:08:50.241345 | orchestrator | 2025-05-30 01:08:50.241355 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-05-30 01:08:50.241366 | orchestrator | Friday 30 May 2025 01:06:27 +0000 (0:00:01.819) 0:01:41.166 ************ 2025-05-30 01:08:50.241377 | orchestrator | skipping: [testbed-manager] 2025-05-30 01:08:50.241388 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:08:50.241398 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:08:50.241409 | 
orchestrator | skipping: [testbed-node-2] 2025-05-30 01:08:50.241420 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:08:50.241431 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:08:50.241442 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:08:50.241453 | orchestrator | 2025-05-30 01:08:50.241464 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-05-30 01:08:50.241475 | orchestrator | Friday 30 May 2025 01:06:28 +0000 (0:00:01.000) 0:01:42.167 ************ 2025-05-30 01:08:50.241486 | orchestrator | skipping: [testbed-manager] 2025-05-30 01:08:50.241497 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:08:50.241507 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:08:50.241526 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:08:50.241537 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:08:50.241548 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:08:50.241559 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:08:50.241569 | orchestrator | 2025-05-30 01:08:50.241581 | orchestrator | TASK [prometheus : Copying over prometheus msteams config file] **************** 2025-05-30 01:08:50.241597 | orchestrator | Friday 30 May 2025 01:06:29 +0000 (0:00:00.805) 0:01:42.972 ************ 2025-05-30 01:08:50.241609 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-30 01:08:50.241620 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-30 01:08:50.241631 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:08:50.241642 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:08:50.241653 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-30 01:08:50.241664 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:08:50.241675 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-30 01:08:50.241686 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:08:50.241697 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-30 01:08:50.241708 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:08:50.241719 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-30 01:08:50.241730 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:08:50.241741 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-30 01:08:50.241752 | orchestrator | skipping: [testbed-manager] 2025-05-30 01:08:50.241763 | orchestrator | 2025-05-30 01:08:50.241774 | orchestrator | TASK [prometheus : Copying over prometheus msteams template file] ************** 2025-05-30 01:08:50.241790 | orchestrator | Friday 30 May 2025 01:06:31 +0000 (0:00:02.630) 0:01:45.602 ************ 2025-05-30 01:08:50.241801 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-30 01:08:50.241812 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:08:50.241823 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-30 01:08:50.241834 | orchestrator | skipping: [testbed-node-0] 2025-05-30 
01:08:50.241845 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-30 01:08:50.241856 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:08:50.241867 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-30 01:08:50.241878 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:08:50.241888 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-30 01:08:50.241899 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:08:50.241910 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-30 01:08:50.241921 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:08:50.241932 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-30 01:08:50.241943 | orchestrator | skipping: [testbed-manager] 2025-05-30 01:08:50.241954 | orchestrator | 2025-05-30 01:08:50.241965 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-05-30 01:08:50.241976 | orchestrator | Friday 30 May 2025 01:06:34 +0000 (0:00:03.051) 0:01:48.654 ************ 2025-05-30 01:08:50.241988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-30 01:08:50.242007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-30 01:08:50.242075 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-30 01:08:50.242114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-30 01:08:50.242126 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-30 01:08:50.242138 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-30 01:08:50.242157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-30 01:08:50.242175 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-30 01:08:50.242187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-30 01:08:50.242203 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-30 01:08:50.242215 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.242227 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.242245 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-30 01:08:50.242257 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.242268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-30 01:08:50.242298 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.242311 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-30 01:08:50.242337 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.242356 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.242373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.242402 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-30 01:08:50.242421 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.242440 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.242466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.242479 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.242491 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.242505 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-30 01:08:50.242526 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-30 01:08:50.242567 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.242587 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.242598 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.242614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.242627 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-30 01:08:50.242649 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-30 01:08:50.242661 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.242678 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.242690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.242706 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.242718 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-30 01:08:50.242737 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-30 01:08:50.242748 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.242765 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.242777 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-30 01:08:50.242794 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-30 01:08:50.242812 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.242824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.242835 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.242846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.242864 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.242884 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.242908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.242939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-30 01:08:50.242959 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.242973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-30 01:08:50.242992 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.243004 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.243026 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.243038 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-30 01:08:50.243050 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.243061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.243079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-30 01:08:50.243129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-30 01:08:50.243155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-30 01:08:50.243167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-30 01:08:50.243180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-30 01:08:50.243192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.243210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.243221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-30 01:08:50.243245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.243257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.243269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.243280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-30 01:08:50.243291 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.243309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-30 01:08:50.243321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.243339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-30 01:08:50.243356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-30 01:08:50.243367 | orchestrator | 2025-05-30 01:08:50.243379 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-05-30 01:08:50.243390 | orchestrator | Friday 30 May 2025 01:06:39 +0000 (0:00:05.144) 0:01:53.799 ************ 2025-05-30 01:08:50.243401 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-05-30 01:08:50.243412 | orchestrator | 2025-05-30 01:08:50.243423 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-30 01:08:50.243434 | orchestrator | Friday 30 May 2025 01:06:43 +0000 (0:00:03.148) 0:01:56.947 ************ 2025-05-30 01:08:50.243445 | orchestrator | 
2025-05-30 01:08:50.243456 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-30 01:08:50.243466 | orchestrator | Friday 30 May 2025 01:06:43 +0000 (0:00:00.098) 0:01:57.046 ************
2025-05-30 01:08:50.243477 | orchestrator |
2025-05-30 01:08:50.243488 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-30 01:08:50.243499 | orchestrator | Friday 30 May 2025 01:06:43 +0000 (0:00:00.427) 0:01:57.473 ************
2025-05-30 01:08:50.243509 | orchestrator |
2025-05-30 01:08:50.243521 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-30 01:08:50.243531 | orchestrator | Friday 30 May 2025 01:06:43 +0000 (0:00:00.061) 0:01:57.535 ************
2025-05-30 01:08:50.243542 | orchestrator |
2025-05-30 01:08:50.243553 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-30 01:08:50.243564 | orchestrator | Friday 30 May 2025 01:06:43 +0000 (0:00:00.058) 0:01:57.593 ************
2025-05-30 01:08:50.243575 | orchestrator |
2025-05-30 01:08:50.243586 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-30 01:08:50.243597 | orchestrator | Friday 30 May 2025 01:06:43 +0000 (0:00:00.063) 0:01:57.656 ************
2025-05-30 01:08:50.243607 | orchestrator |
2025-05-30 01:08:50.243618 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-30 01:08:50.243629 | orchestrator | Friday 30 May 2025 01:06:44 +0000 (0:00:00.289) 0:01:57.946 ************
2025-05-30 01:08:50.243640 | orchestrator |
2025-05-30 01:08:50.243651 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2025-05-30 01:08:50.243662 | orchestrator | Friday 30 May 2025 01:06:44 +0000 (0:00:00.081) 0:01:58.027 ************
2025-05-30 01:08:50.243672 | orchestrator | changed: [testbed-manager]
2025-05-30 01:08:50.243683 | orchestrator |
2025-05-30 01:08:50.243694 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2025-05-30 01:08:50.243705 | orchestrator | Friday 30 May 2025 01:07:01 +0000 (0:00:17.207) 0:02:15.234 ************
2025-05-30 01:08:50.243716 | orchestrator | changed: [testbed-node-5]
2025-05-30 01:08:50.243727 | orchestrator | changed: [testbed-node-0]
2025-05-30 01:08:50.243747 | orchestrator | changed: [testbed-node-2]
2025-05-30 01:08:50.243758 | orchestrator | changed: [testbed-node-3]
2025-05-30 01:08:50.243769 | orchestrator | changed: [testbed-node-4]
2025-05-30 01:08:50.243779 | orchestrator | changed: [testbed-node-1]
2025-05-30 01:08:50.243791 | orchestrator | changed: [testbed-manager]
2025-05-30 01:08:50.243802 | orchestrator |
2025-05-30 01:08:50.243812 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2025-05-30 01:08:50.243823 | orchestrator | Friday 30 May 2025 01:07:24 +0000 (0:00:22.619) 0:02:37.854 ************
2025-05-30 01:08:50.243834 | orchestrator | changed: [testbed-node-1]
2025-05-30 01:08:50.243845 | orchestrator | changed: [testbed-node-0]
2025-05-30 01:08:50.243855 | orchestrator | changed: [testbed-node-2]
2025-05-30 01:08:50.243866 | orchestrator |
2025-05-30 01:08:50.243877 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2025-05-30 01:08:50.243894 | orchestrator | Friday 30 May 2025 01:07:33 +0000 (0:00:09.533) 0:02:47.388 ************
2025-05-30 01:08:50.243905 | orchestrator | changed: [testbed-node-0]
2025-05-30 01:08:50.243916 | orchestrator | changed: [testbed-node-2]
2025-05-30 01:08:50.243927 | orchestrator | changed: [testbed-node-1]
2025-05-30 01:08:50.243938 | orchestrator |
2025-05-30 01:08:50.243948 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2025-05-30 01:08:50.243959 | orchestrator | Friday 30 May 2025 01:07:44 +0000 (0:00:11.263) 0:02:58.651 ************
2025-05-30 01:08:50.243986 | orchestrator | changed: [testbed-node-1]
2025-05-30 01:08:50.244007 | orchestrator | changed: [testbed-node-0]
2025-05-30 01:08:50.244019 | orchestrator | changed: [testbed-node-3]
2025-05-30 01:08:50.244030 | orchestrator | changed: [testbed-node-5]
2025-05-30 01:08:50.244041 | orchestrator | changed: [testbed-node-2]
2025-05-30 01:08:50.244051 | orchestrator | changed: [testbed-node-4]
2025-05-30 01:08:50.244062 | orchestrator | changed: [testbed-manager]
2025-05-30 01:08:50.244073 | orchestrator |
2025-05-30 01:08:50.244105 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2025-05-30 01:08:50.244118 | orchestrator | Friday 30 May 2025 01:08:05 +0000 (0:00:20.516) 0:03:19.168 ************
2025-05-30 01:08:50.244129 | orchestrator | changed: [testbed-manager]
2025-05-30 01:08:50.244139 | orchestrator |
2025-05-30 01:08:50.244150 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2025-05-30 01:08:50.244161 | orchestrator | Friday 30 May 2025 01:08:14 +0000 (0:00:09.195) 0:03:28.364 ************
2025-05-30 01:08:50.244172 | orchestrator | changed: [testbed-node-1]
2025-05-30 01:08:50.244183 | orchestrator | changed: [testbed-node-2]
2025-05-30 01:08:50.244193 | orchestrator | changed: [testbed-node-0]
2025-05-30 01:08:50.244204 | orchestrator |
2025-05-30 01:08:50.244215 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2025-05-30 01:08:50.244231 | orchestrator | Friday 30 May 2025 01:08:22 +0000 (0:00:08.082) 0:03:36.447 ************
2025-05-30 01:08:50.244242 | orchestrator | changed: [testbed-manager]
2025-05-30 01:08:50.244254 | orchestrator |
2025-05-30 01:08:50.244264 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2025-05-30 01:08:50.244275 | orchestrator | Friday 30 May 2025 01:08:35 +0000 (0:00:12.921) 0:03:49.369 ************
2025-05-30 01:08:50.244286 | orchestrator | changed: [testbed-node-4]
2025-05-30 01:08:50.244297 | orchestrator | changed: [testbed-node-5]
2025-05-30 01:08:50.244308 | orchestrator | changed: [testbed-node-3]
2025-05-30 01:08:50.244319 | orchestrator |
2025-05-30 01:08:50.244330 | orchestrator | PLAY RECAP *********************************************************************
2025-05-30 01:08:50.244341 | orchestrator | testbed-manager : ok=24  changed=15  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-05-30 01:08:50.244353 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-05-30 01:08:50.244365 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-05-30 01:08:50.244383 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-05-30 01:08:50.244394 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-05-30 01:08:50.244405 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-05-30 01:08:50.244416 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-05-30 01:08:50.244427 | orchestrator |
2025-05-30 01:08:50.244438 | orchestrator |
2025-05-30 01:08:50.244448 | orchestrator | TASKS RECAP ********************************************************************
2025-05-30 01:08:50.244460 | orchestrator | Friday 30 May 2025 01:08:47 +0000 (0:00:12.327) 0:04:01.696 ************
2025-05-30 01:08:50.244471 | orchestrator | ===============================================================================
2025-05-30 01:08:50.244481 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 31.75s
2025-05-30 01:08:50.244493 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 22.62s
2025-05-30 01:08:50.244503 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 20.52s
2025-05-30 01:08:50.244514 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 17.21s
2025-05-30 01:08:50.244525 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 15.08s
2025-05-30 01:08:50.244535 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 12.92s
2025-05-30 01:08:50.244546 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 12.33s
2025-05-30 01:08:50.244557 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 11.26s
2025-05-30 01:08:50.244568 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 9.54s
2025-05-30 01:08:50.244578 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 9.20s
2025-05-30 01:08:50.244589 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 8.08s
2025-05-30 01:08:50.244600 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.98s
2025-05-30 01:08:50.244611 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.29s
2025-05-30 01:08:50.244628 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 5.21s
2025-05-30 01:08:50.244640 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.14s
2025-05-30 01:08:50.244651 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.72s
2025-05-30 01:08:50.244662 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 4.09s
2025-05-30 01:08:50.244673 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 3.96s
2025-05-30 01:08:50.244683 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 3.39s
2025-05-30 01:08:50.244694 | orchestrator | prometheus : Creating prometheus database user and setting permissions --- 3.15s
2025-05-30 01:08:50.244705 | orchestrator | 2025-05-30 01:08:50 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED
2025-05-30 01:08:50.244717 | orchestrator | 2025-05-30 01:08:50 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED
2025-05-30 01:08:50.244728 | orchestrator |
2025-05-30 01:08:50 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:08:53.275912 | orchestrator | 2025-05-30 01:08:53 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:08:53.276167 | orchestrator | 2025-05-30 01:08:53 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:08:53.277732 | orchestrator | 2025-05-30 01:08:53 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:08:53.278577 | orchestrator | 2025-05-30 01:08:53 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:08:53.280109 | orchestrator | 2025-05-30 01:08:53 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:08:53.280202 | orchestrator | 2025-05-30 01:08:53 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:08:56.329052 | orchestrator | 2025-05-30 01:08:56 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:08:56.330388 | orchestrator | 2025-05-30 01:08:56 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:08:56.332363 | orchestrator | 2025-05-30 01:08:56 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:08:56.335761 | orchestrator | 2025-05-30 01:08:56 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:08:56.336139 | orchestrator | 2025-05-30 01:08:56 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:08:56.336378 | orchestrator | 2025-05-30 01:08:56 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:08:59.398142 | orchestrator | 2025-05-30 01:08:59 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:08:59.399313 | orchestrator | 2025-05-30 01:08:59 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:08:59.400876 | orchestrator | 2025-05-30 01:08:59 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:08:59.403419 | orchestrator | 2025-05-30 01:08:59 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:08:59.405192 | orchestrator | 2025-05-30 01:08:59 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:08:59.405234 | orchestrator | 2025-05-30 01:08:59 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:09:02.442727 | orchestrator | 2025-05-30 01:09:02 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:09:02.442817 | orchestrator | 2025-05-30 01:09:02 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:09:02.443985 | orchestrator | 2025-05-30 01:09:02 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:09:02.444717 | orchestrator | 2025-05-30 01:09:02 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:09:02.445627 | orchestrator | 2025-05-30 01:09:02 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:09:02.446550 | orchestrator | 2025-05-30 01:09:02 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:09:05.492411 | orchestrator | 2025-05-30 01:09:05 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:09:05.493085 | orchestrator | 2025-05-30 01:09:05 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:09:05.494600 | orchestrator | 
2025-05-30 01:09:05 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:09:05.500601 | orchestrator | 2025-05-30 01:09:05 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:09:05.500613 | orchestrator | 2025-05-30 01:09:05 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:09:05.500620 | orchestrator | 2025-05-30 01:09:05 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:09:08.548335 | orchestrator | 2025-05-30 01:09:08 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:09:08.549286 | orchestrator | 2025-05-30 01:09:08 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:09:08.551102 | orchestrator | 2025-05-30 01:09:08 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:09:08.551435 | orchestrator | 2025-05-30 01:09:08 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:09:08.552533 | orchestrator | 2025-05-30 01:09:08 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:09:08.552627 | orchestrator | 2025-05-30 01:09:08 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:09:11.595092 | orchestrator | 2025-05-30 01:09:11 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:09:11.598316 | orchestrator | 2025-05-30 01:09:11 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:09:11.599201 | orchestrator | 2025-05-30 01:09:11 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:09:11.599532 | orchestrator | 2025-05-30 01:09:11 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:09:11.600220 | orchestrator | 2025-05-30 01:09:11 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:09:11.601443 | orchestrator | 2025-05-30 01:09:11 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:09:14.645223 | orchestrator | 2025-05-30 01:09:14 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:09:14.646826 | orchestrator | 2025-05-30 01:09:14 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:09:14.647966 | orchestrator | 2025-05-30 01:09:14 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:09:14.649207 | orchestrator | 2025-05-30 01:09:14 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:09:14.651117 | orchestrator | 2025-05-30 01:09:14 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:09:14.651137 | orchestrator | 2025-05-30 01:09:14 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:09:17.708207 | orchestrator | 2025-05-30 01:09:17 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:09:17.709742 | orchestrator | 2025-05-30 01:09:17 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:09:17.710733 | orchestrator | 2025-05-30 01:09:17 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:09:17.711793 | orchestrator | 2025-05-30 01:09:17 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:09:17.712671 | orchestrator | 2025-05-30 01:09:17 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 
01:09:17.712852 | orchestrator | 2025-05-30 01:09:17 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:09:20.765024 | orchestrator | 2025-05-30 01:09:20 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:09:20.766560 | orchestrator | 2025-05-30 01:09:20 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:09:20.768104 | orchestrator | 2025-05-30 01:09:20 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:09:20.770281 | orchestrator | 2025-05-30 01:09:20 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:09:20.771237 | orchestrator | 2025-05-30 01:09:20 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:09:20.771309 | orchestrator | 2025-05-30 01:09:20 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:09:23.817456 | orchestrator | 2025-05-30 01:09:23 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:09:23.817774 | orchestrator | 2025-05-30 01:09:23 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:09:23.819661 | orchestrator | 2025-05-30 01:09:23 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state STARTED 2025-05-30 01:09:23.820561 | orchestrator | 2025-05-30 01:09:23 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state STARTED 2025-05-30 01:09:23.821201 | orchestrator | 2025-05-30 01:09:23 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:09:23.821661 | orchestrator | 2025-05-30 01:09:23 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:09:26.861189 | orchestrator | 2025-05-30 01:09:26 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:09:26.863810 | orchestrator | 2025-05-30 01:09:26 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:09:26.867038 | orchestrator | 2025-05-30 01:09:26 | INFO  | Task b36f4df0-8441-42e4-806e-8f980b6c9772 is in state SUCCESS 2025-05-30 01:09:26.868613 | orchestrator | 2025-05-30 01:09:26.868648 | orchestrator | 2025-05-30 01:09:26.868660 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-30 01:09:26.868672 | orchestrator | 2025-05-30 01:09:26.868683 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-30 01:09:26.868694 | orchestrator | Friday 30 May 2025 01:06:18 +0000 (0:00:00.651) 0:00:00.651 ************ 2025-05-30 01:09:26.868705 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:09:26.868718 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:09:26.868728 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:09:26.868739 | orchestrator | ok: [testbed-node-3] 2025-05-30 01:09:26.868750 | orchestrator | ok: [testbed-node-4] 2025-05-30 01:09:26.868760 | orchestrator | ok: [testbed-node-5] 2025-05-30 01:09:26.868771 | orchestrator | 2025-05-30 01:09:26.868800 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-30 01:09:26.868811 | orchestrator | Friday 30 May 2025 01:06:19 +0000 (0:00:00.671) 0:00:01.323 ************ 2025-05-30 01:09:26.868822 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-05-30 01:09:26.868839 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-05-30 01:09:26.868859 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 
2025-05-30 01:09:26.868877 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-05-30 01:09:26.868896 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-05-30 01:09:26.868914 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-05-30 01:09:26.868932 | orchestrator | 2025-05-30 01:09:26.868948 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-05-30 01:09:26.868965 | orchestrator | 2025-05-30 01:09:26.868985 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-30 01:09:26.869005 | orchestrator | Friday 30 May 2025 01:06:20 +0000 (0:00:00.845) 0:00:02.168 ************ 2025-05-30 01:09:26.869025 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 01:09:26.869046 | orchestrator | 2025-05-30 01:09:26.869091 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-05-30 01:09:26.869102 | orchestrator | Friday 30 May 2025 01:06:22 +0000 (0:00:01.950) 0:00:04.119 ************ 2025-05-30 01:09:26.869114 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-05-30 01:09:26.869124 | orchestrator | 2025-05-30 01:09:26.869163 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-05-30 01:09:26.869175 | orchestrator | Friday 30 May 2025 01:06:25 +0000 (0:00:03.262) 0:00:07.382 ************ 2025-05-30 01:09:26.869185 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-05-30 01:09:26.869197 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-05-30 01:09:26.869208 | orchestrator | 2025-05-30 01:09:26.869220 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-05-30 01:09:26.869233 | orchestrator | Friday 30 May 2025 01:06:31 +0000 (0:00:06.191) 0:00:13.573 ************ 2025-05-30 01:09:26.869246 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-30 01:09:26.869260 | orchestrator | 2025-05-30 01:09:26.869272 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-05-30 01:09:26.869284 | orchestrator | Friday 30 May 2025 01:06:34 +0000 (0:00:03.387) 0:00:16.960 ************ 2025-05-30 01:09:26.869297 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-30 01:09:26.869309 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-05-30 01:09:26.869321 | orchestrator | 2025-05-30 01:09:26.869334 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-05-30 01:09:26.869346 | orchestrator | Friday 30 May 2025 01:06:38 +0000 (0:00:03.890) 0:00:20.851 ************ 2025-05-30 01:09:26.869359 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-30 01:09:26.869371 | orchestrator | 2025-05-30 01:09:26.869383 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-05-30 01:09:26.869396 | orchestrator | Friday 30 May 2025 01:06:42 +0000 (0:00:03.170) 0:00:24.022 ************ 2025-05-30 01:09:26.869408 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-05-30 
01:09:26.869420 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-05-30 01:09:26.869432 | orchestrator | 2025-05-30 01:09:26.869445 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-05-30 01:09:26.869458 | orchestrator | Friday 30 May 2025 01:06:50 +0000 (0:00:08.100) 0:00:32.122 ************ 2025-05-30 01:09:26.869538 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-30 01:09:26.869567 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.869594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-30 01:09:26.869616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-30 01:09:26.869628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-30 01:09:26.869647 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-30 01:09:26.869665 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.869683 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-30 01:09:26.869695 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.869707 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.869718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.869737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.869754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.869772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.869784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.869796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.869807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.869842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.869862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.869874 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.869886 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.869897 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 
'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.869919 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.869938 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.869949 | orchestrator | 2025-05-30 01:09:26.869960 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-30 01:09:26.869971 | orchestrator | Friday 30 May 2025 01:06:52 +0000 (0:00:02.089) 0:00:34.212 ************ 2025-05-30 01:09:26.869982 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:09:26.869993 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:09:26.870004 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:09:26.870090 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 01:09:26.870105 | orchestrator | 2025-05-30 01:09:26.870116 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-05-30 01:09:26.870126 | orchestrator | Friday 30 May 2025 01:06:53 +0000 (0:00:00.980) 0:00:35.192 ************ 2025-05-30 01:09:26.870137 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-05-30 01:09:26.870148 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-05-30 01:09:26.870159 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-05-30 01:09:26.870169 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-05-30 01:09:26.870180 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-05-30 01:09:26.870190 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-05-30 01:09:26.870201 | orchestrator | 2025-05-30 01:09:26.870212 | 
orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-05-30 01:09:26.870222 | orchestrator | Friday 30 May 2025 01:06:57 +0000 (0:00:04.212) 0:00:39.405 ************ 2025-05-30 01:09:26.870249 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-30 01:09:26.870264 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-30 01:09:26.870299 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-30 01:09:26.870312 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 
'ceph', 'enabled': True}])  2025-05-30 01:09:26.870324 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-30 01:09:26.870335 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-30 01:09:26.870347 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-30 01:09:26.870384 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': 
'30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-30 01:09:26.870397 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-30 01:09:26.870409 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-30 01:09:26.870421 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-30 01:09:26.870437 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-30 01:09:26.870456 | orchestrator | 2025-05-30 01:09:26.870467 | orchestrator | TASK 
[cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-05-30 01:09:26.870478 | orchestrator | Friday 30 May 2025 01:07:01 +0000 (0:00:03.984) 0:00:43.389 ************ 2025-05-30 01:09:26.870489 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-30 01:09:26.870500 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-30 01:09:26.870511 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-30 01:09:26.870521 | orchestrator | 2025-05-30 01:09:26.870537 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-05-30 01:09:26.870548 | orchestrator | Friday 30 May 2025 01:07:05 +0000 (0:00:04.303) 0:00:47.692 ************ 2025-05-30 01:09:26.870559 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-05-30 01:09:26.870570 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-05-30 01:09:26.870580 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-05-30 01:09:26.870591 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-05-30 01:09:26.870602 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-05-30 01:09:26.870613 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-05-30 01:09:26.870623 | orchestrator | 2025-05-30 01:09:26.870634 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-05-30 01:09:26.870644 | orchestrator | Friday 30 May 2025 01:07:10 +0000 (0:00:04.928) 0:00:52.621 ************ 2025-05-30 01:09:26.870655 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-05-30 01:09:26.870666 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-05-30 01:09:26.870676 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-05-30 01:09:26.870687 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-05-30 01:09:26.870697 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-05-30 01:09:26.870708 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-05-30 01:09:26.870718 | orchestrator | 2025-05-30 01:09:26.870729 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-05-30 01:09:26.870740 | orchestrator | Friday 30 May 2025 01:07:12 +0000 (0:00:01.997) 0:00:54.618 ************ 2025-05-30 01:09:26.870750 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:09:26.870761 | orchestrator | 2025-05-30 01:09:26.870772 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-05-30 01:09:26.870783 | orchestrator | Friday 30 May 2025 01:07:12 +0000 (0:00:00.339) 0:00:54.958 ************ 2025-05-30 01:09:26.870794 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:09:26.870804 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:09:26.870815 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:09:26.870826 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:09:26.870836 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:09:26.870847 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:09:26.870857 | orchestrator | 2025-05-30 01:09:26.870868 | orchestrator | TASK [cinder : include_tasks] 
************************************************** 2025-05-30 01:09:26.870885 | orchestrator | Friday 30 May 2025 01:07:14 +0000 (0:00:01.986) 0:00:56.945 ************ 2025-05-30 01:09:26.870897 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 01:09:26.870923 | orchestrator | 2025-05-30 01:09:26.870934 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-05-30 01:09:26.870944 | orchestrator | Friday 30 May 2025 01:07:16 +0000 (0:00:01.564) 0:00:58.509 ************ 2025-05-30 01:09:26.870956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-30 01:09:26.870975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-30 01:09:26.870992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-30 01:09:26.871004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.871016 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.871034 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.871071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.871089 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.871100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.871112 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.871131 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.871142 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.871153 | orchestrator | 2025-05-30 01:09:26.871164 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-05-30 01:09:26.871174 | orchestrator | Friday 30 May 2025 01:07:19 +0000 (0:00:03.177) 0:01:01.686 ************ 2025-05-30 01:09:26.871197 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-30 01:09:26.871209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.871220 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:09:26.871232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-30 01:09:26.871253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.871264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-30 01:09:26.871284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.871295 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:09:26.871312 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:09:26.871337 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.871367 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.871400 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:09:26.871417 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.871435 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.871454 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:09:26.871499 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.871519 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.871531 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:09:26.871552 | orchestrator | 2025-05-30 01:09:26.871563 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-05-30 01:09:26.871574 | orchestrator | Friday 30 May 2025 01:07:21 +0000 (0:00:01.635) 0:01:03.321 ************ 2025-05-30 01:09:26.871585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-30 01:09:26.871596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.871608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-30 01:09:26.871627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.871647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-30 01:09:26.871668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.871679 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:09:26.871689 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:09:26.871700 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:09:26.871711 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.871722 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.871733 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:09:26.871751 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.871768 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.871786 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:09:26.871797 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.871808 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.871819 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:09:26.871830 | orchestrator | 2025-05-30 01:09:26.871841 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-05-30 01:09:26.871851 | orchestrator | Friday 30 May 2025 01:07:23 +0000 (0:00:01.900) 0:01:05.221 ************ 2025-05-30 01:09:26.871862 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 
'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-30 01:09:26.871881 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.871898 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-30 01:09:26.871916 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.871928 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-30 01:09:26.871939 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.871951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-30 01:09:26.871975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-30 01:09:26.872015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-30 01:09:26.872039 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.872089 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.872119 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.872158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.872178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.872199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.872213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.872225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.872248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  
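The "Copying over config.json files for services" records above (and the skipped backend-TLS certificate/key tasks before them, which every host skips because the service map advertises `'tls_backend': 'no'`) all iterate over the same cinder service map that appears in the item payloads: cinder-api and cinder-scheduler change on the control nodes, cinder-volume and cinder-backup on the storage nodes, and everything else is skipped because the host is not in the service's group. A hedged sketch of that per-service templating loop (the template path and the `cinder_services` variable are assumptions inferred from the log, not an excerpt from the role):

```yaml
# Hedged sketch of the per-service config.json loop in kolla-ansible style.
# The keys and the 'group'/'enabled' fields mirror the item payloads in the
# log; the template path and variable names are illustrative assumptions.
- name: Copying over config.json files for services
  ansible.builtin.template:
    src: "{{ item.key }}.json.j2"
    dest: "{{ node_config_directory }}/{{ item.key }}/config.json"
    mode: "0660"
  become: true
  when:
    - item.value.enabled | bool
    - inventory_hostname in groups[item.value.group]
  with_dict: "{{ cinder_services }}"
```

The "Copying over cinder-wsgi.conf" and "Copying over cinder.conf" tasks that follow apply the same group-based conditions, which is why only the API hosts render the WSGI config while the storage hosts render the volume and backup configuration; upstream kolla-ansible typically assembles cinder.conf by merging a global template with per-node overrides rather than templating a single file, though the exact mechanism is not visible in this log.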
2025-05-30 01:09:26.872269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.872281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.872292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.872304 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.872321 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.872345 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.872357 | orchestrator | 2025-05-30 01:09:26.872368 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-05-30 01:09:26.872379 | orchestrator | Friday 30 May 2025 01:07:26 +0000 (0:00:03.042) 0:01:08.264 ************ 2025-05-30 01:09:26.872389 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-30 01:09:26.872400 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:09:26.872411 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-30 01:09:26.872422 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-30 01:09:26.872433 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:09:26.872443 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-30 01:09:26.872454 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-30 01:09:26.872465 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:09:26.872475 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-30 01:09:26.872486 | orchestrator | 2025-05-30 01:09:26.872497 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-05-30 01:09:26.872507 | orchestrator | Friday 30 May 2025 01:07:30 +0000 (0:00:04.129) 0:01:12.393 ************ 2025-05-30 01:09:26.872518 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-30 01:09:26.872530 
| orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.872553 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-30 01:09:26.872573 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.872585 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-30 01:09:26.872596 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.872608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-30 01:09:26.872619 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.872649 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.872662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-30 01:09:26.872673 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.872685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-30 01:09:26.872705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.872728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.872740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.872752 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.872764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.872775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.872800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.872816 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.872828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.872839 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.872850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.872868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.872879 | orchestrator | 2025-05-30 01:09:26.872896 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-05-30 01:09:26.872907 | orchestrator | Friday 30 May 2025 01:07:42 +0000 (0:00:12.459) 0:01:24.853 ************ 2025-05-30 01:09:26.872918 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:09:26.872929 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:09:26.872939 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:09:26.872950 | orchestrator | changed: [testbed-node-3] 2025-05-30 01:09:26.872960 | orchestrator | changed: [testbed-node-4] 2025-05-30 01:09:26.872971 | orchestrator | changed: [testbed-node-5] 2025-05-30 01:09:26.872981 | orchestrator | 2025-05-30 01:09:26.872992 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-05-30 01:09:26.873003 | orchestrator | Friday 30 May 2025 01:07:46 +0000 (0:00:03.499) 0:01:28.352 ************ 2025-05-30 01:09:26.873018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-30 01:09:26.873030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.873042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.873121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.873134 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:09:26.873153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-30 01:09:26.873170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.873182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.873193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.873211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-30 01:09:26.873228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.873244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-30 
01:09:26.873256 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:09:26.873267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.873278 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:09:26.873289 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-30 01:09:26.873308 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.873319 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.873343 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.873355 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:09:26.873366 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-30 01:09:26.873378 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.873399 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.873940 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.873960 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:09:26.873977 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-30 01:09:26.873987 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.873997 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.874073 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.874086 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:09:26.874096 | orchestrator | 2025-05-30 01:09:26.874106 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-05-30 01:09:26.874116 | orchestrator | Friday 30 May 2025 01:07:49 +0000 (0:00:02.923) 0:01:31.276 ************ 2025-05-30 01:09:26.874125 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:09:26.874135 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:09:26.874144 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:09:26.874154 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:09:26.874163 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:09:26.874178 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:09:26.874194 | orchestrator | 2025-05-30 01:09:26.874214 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-05-30 01:09:26.874238 | orchestrator | Friday 30 May 2025 01:07:50 +0000 (0:00:01.683) 0:01:32.960 ************ 2025-05-30 01:09:26.874267 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-30 01:09:26.874295 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.874313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-30 01:09:26.874342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-30 01:09:26.874361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-30 01:09:26.874388 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-30 01:09:26.874410 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.874429 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-30 01:09:26.874439 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.874449 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.874466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.874482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.874492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.874509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.874519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.874529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.874545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.874562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.874580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-30 01:09:26.874593 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.874605 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.874622 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.874639 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.874657 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-30 01:09:26.874668 | orchestrator | 2025-05-30 01:09:26.874679 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-30 01:09:26.874690 | orchestrator | Friday 30 May 2025 01:07:55 +0000 (0:00:04.700) 0:01:37.660 ************ 2025-05-30 01:09:26.874701 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:09:26.874713 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:09:26.874723 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:09:26.874734 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:09:26.874745 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:09:26.874756 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:09:26.874767 | orchestrator | 2025-05-30 01:09:26.874778 | orchestrator | TASK [cinder : Creating 
Cinder database] *************************************** 2025-05-30 01:09:26.874790 | orchestrator | Friday 30 May 2025 01:07:56 +0000 (0:00:00.872) 0:01:38.533 ************ 2025-05-30 01:09:26.874801 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:09:26.874812 | orchestrator | 2025-05-30 01:09:26.874823 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-05-30 01:09:26.874834 | orchestrator | Friday 30 May 2025 01:07:59 +0000 (0:00:02.527) 0:01:41.061 ************ 2025-05-30 01:09:26.874845 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:09:26.874857 | orchestrator | 2025-05-30 01:09:26.874868 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-05-30 01:09:26.874880 | orchestrator | Friday 30 May 2025 01:08:01 +0000 (0:00:02.316) 0:01:43.378 ************ 2025-05-30 01:09:26.874891 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:09:26.874902 | orchestrator | 2025-05-30 01:09:26.874913 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-30 01:09:26.874924 | orchestrator | Friday 30 May 2025 01:08:20 +0000 (0:00:19.332) 0:02:02.711 ************ 2025-05-30 01:09:26.874934 | orchestrator | 2025-05-30 01:09:26.874943 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-30 01:09:26.874952 | orchestrator | Friday 30 May 2025 01:08:20 +0000 (0:00:00.048) 0:02:02.759 ************ 2025-05-30 01:09:26.874962 | orchestrator | 2025-05-30 01:09:26.874971 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-30 01:09:26.874981 | orchestrator | Friday 30 May 2025 01:08:20 +0000 (0:00:00.141) 0:02:02.900 ************ 2025-05-30 01:09:26.874990 | orchestrator | 2025-05-30 01:09:26.875000 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-30 01:09:26.875009 | orchestrator | Friday 30 May 2025 01:08:20 +0000 (0:00:00.048) 0:02:02.948 ************ 2025-05-30 01:09:26.875018 | orchestrator | 2025-05-30 01:09:26.875028 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-30 01:09:26.875037 | orchestrator | Friday 30 May 2025 01:08:21 +0000 (0:00:00.049) 0:02:02.998 ************ 2025-05-30 01:09:26.875047 | orchestrator | 2025-05-30 01:09:26.875082 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-30 01:09:26.875093 | orchestrator | Friday 30 May 2025 01:08:21 +0000 (0:00:00.051) 0:02:03.050 ************ 2025-05-30 01:09:26.875109 | orchestrator | 2025-05-30 01:09:26.875118 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-05-30 01:09:26.875133 | orchestrator | Friday 30 May 2025 01:08:21 +0000 (0:00:00.156) 0:02:03.206 ************ 2025-05-30 01:09:26.875143 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:09:26.875153 | orchestrator | changed: [testbed-node-2] 2025-05-30 01:09:26.875162 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:09:26.875172 | orchestrator | 2025-05-30 01:09:26.875181 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-05-30 01:09:26.875191 | orchestrator | Friday 30 May 2025 01:08:38 +0000 (0:00:17.132) 0:02:20.339 ************ 2025-05-30 01:09:26.875200 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:09:26.875210 | 
orchestrator | changed: [testbed-node-0] 2025-05-30 01:09:26.875220 | orchestrator | changed: [testbed-node-2] 2025-05-30 01:09:26.875229 | orchestrator | 2025-05-30 01:09:26.875239 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-05-30 01:09:26.875248 | orchestrator | Friday 30 May 2025 01:08:49 +0000 (0:00:11.008) 0:02:31.347 ************ 2025-05-30 01:09:26.875258 | orchestrator | changed: [testbed-node-3] 2025-05-30 01:09:26.875267 | orchestrator | changed: [testbed-node-4] 2025-05-30 01:09:26.875277 | orchestrator | changed: [testbed-node-5] 2025-05-30 01:09:26.875286 | orchestrator | 2025-05-30 01:09:26.875296 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-05-30 01:09:26.875305 | orchestrator | Friday 30 May 2025 01:09:14 +0000 (0:00:24.794) 0:02:56.142 ************ 2025-05-30 01:09:26.875315 | orchestrator | changed: [testbed-node-4] 2025-05-30 01:09:26.875324 | orchestrator | changed: [testbed-node-5] 2025-05-30 01:09:26.875334 | orchestrator | changed: [testbed-node-3] 2025-05-30 01:09:26.875344 | orchestrator | 2025-05-30 01:09:26.875367 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-05-30 01:09:26.875383 | orchestrator | Friday 30 May 2025 01:09:25 +0000 (0:00:10.885) 0:03:07.027 ************ 2025-05-30 01:09:26.875399 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:09:26.875414 | orchestrator | 2025-05-30 01:09:26.875429 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 01:09:26.875445 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-05-30 01:09:26.875462 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-30 01:09:26.875480 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-30 01:09:26.875493 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-30 01:09:26.875502 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-30 01:09:26.875512 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-30 01:09:26.875521 | orchestrator | 2025-05-30 01:09:26.875530 | orchestrator | 2025-05-30 01:09:26.875540 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-30 01:09:26.875549 | orchestrator | Friday 30 May 2025 01:09:25 +0000 (0:00:00.578) 0:03:07.606 ************ 2025-05-30 01:09:26.875559 | orchestrator | =============================================================================== 2025-05-30 01:09:26.875568 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 24.79s 2025-05-30 01:09:26.875577 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.33s 2025-05-30 01:09:26.875587 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 17.13s 2025-05-30 01:09:26.875608 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 12.46s 2025-05-30 01:09:26.875618 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 11.01s 2025-05-30 01:09:26.875627 | orchestrator | 
cinder : Restart cinder-backup container ------------------------------- 10.89s 2025-05-30 01:09:26.875636 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.10s 2025-05-30 01:09:26.875646 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.19s 2025-05-30 01:09:26.875655 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 4.93s 2025-05-30 01:09:26.875664 | orchestrator | cinder : Check cinder containers ---------------------------------------- 4.70s 2025-05-30 01:09:26.875674 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 4.30s 2025-05-30 01:09:26.875683 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 4.21s 2025-05-30 01:09:26.875692 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 4.13s 2025-05-30 01:09:26.875702 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.98s 2025-05-30 01:09:26.875711 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.89s 2025-05-30 01:09:26.875720 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 3.50s 2025-05-30 01:09:26.875730 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.39s 2025-05-30 01:09:26.875739 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.26s 2025-05-30 01:09:26.875748 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.18s 2025-05-30 01:09:26.875758 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.17s 2025-05-30 01:09:26.875773 | orchestrator | 2025-05-30 01:09:26.875783 | orchestrator | 2025-05-30 01:09:26.875792 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-30 01:09:26.875802 | orchestrator | 2025-05-30 01:09:26.875811 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-30 01:09:26.875820 | orchestrator | Friday 30 May 2025 01:06:10 +0000 (0:00:00.319) 0:00:00.319 ************ 2025-05-30 01:09:26.875830 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:09:26.875839 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:09:26.875849 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:09:26.875858 | orchestrator | 2025-05-30 01:09:26.875868 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-30 01:09:26.875878 | orchestrator | Friday 30 May 2025 01:06:10 +0000 (0:00:00.392) 0:00:00.712 ************ 2025-05-30 01:09:26.875895 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-05-30 01:09:26.875921 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-05-30 01:09:26.875938 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-05-30 01:09:26.875953 | orchestrator | 2025-05-30 01:09:26.875969 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-05-30 01:09:26.875985 | orchestrator | 2025-05-30 01:09:26.876002 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-30 01:09:26.876014 | orchestrator | Friday 30 May 2025 01:06:11 +0000 (0:00:00.416) 0:00:01.128 ************ 2025-05-30 
01:09:26.876030 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 01:09:26.876040 | orchestrator | 2025-05-30 01:09:26.876080 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-05-30 01:09:26.876091 | orchestrator | Friday 30 May 2025 01:06:12 +0000 (0:00:01.460) 0:00:02.589 ************ 2025-05-30 01:09:26.876100 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-05-30 01:09:26.876110 | orchestrator | 2025-05-30 01:09:26.876119 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-05-30 01:09:26.876129 | orchestrator | Friday 30 May 2025 01:06:15 +0000 (0:00:03.212) 0:00:05.801 ************ 2025-05-30 01:09:26.876172 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-05-30 01:09:26.876182 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-05-30 01:09:26.876192 | orchestrator | 2025-05-30 01:09:26.876201 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-05-30 01:09:26.876211 | orchestrator | Friday 30 May 2025 01:06:22 +0000 (0:00:06.374) 0:00:12.176 ************ 2025-05-30 01:09:26.876220 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-30 01:09:26.876230 | orchestrator | 2025-05-30 01:09:26.876239 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-05-30 01:09:26.876248 | orchestrator | Friday 30 May 2025 01:06:25 +0000 (0:00:03.215) 0:00:15.392 ************ 2025-05-30 01:09:26.876258 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-30 01:09:26.876267 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-05-30 01:09:26.876277 | orchestrator | 2025-05-30 01:09:26.876286 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-05-30 01:09:26.876295 | orchestrator | Friday 30 May 2025 01:06:29 +0000 (0:00:03.745) 0:00:19.137 ************ 2025-05-30 01:09:26.876305 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-30 01:09:26.876314 | orchestrator | 2025-05-30 01:09:26.876324 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-05-30 01:09:26.876333 | orchestrator | Friday 30 May 2025 01:06:32 +0000 (0:00:03.454) 0:00:22.591 ************ 2025-05-30 01:09:26.876343 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-05-30 01:09:26.876352 | orchestrator | 2025-05-30 01:09:26.876361 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-05-30 01:09:26.876371 | orchestrator | Friday 30 May 2025 01:06:36 +0000 (0:00:04.175) 0:00:26.767 ************ 2025-05-30 01:09:26.876393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-30 01:09:26.876412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-30 01:09:26.876430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 
'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-30 01:09:26.876453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-30 01:09:26.876472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-30 01:09:26.876494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-30 01:09:26.876512 | orchestrator | 2025-05-30 01:09:26.876522 | orchestrator | TASK [glance : include_tasks] 
************************************************** 2025-05-30 01:09:26.876531 | orchestrator | Friday 30 May 2025 01:06:40 +0000 (0:00:03.982) 0:00:30.750 ************ 2025-05-30 01:09:26.876541 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 01:09:26.876551 | orchestrator | 2025-05-30 01:09:26.876560 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-05-30 01:09:26.876570 | orchestrator | Friday 30 May 2025 01:06:41 +0000 (0:00:00.674) 0:00:31.424 ************ 2025-05-30 01:09:26.876579 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:09:26.876588 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:09:26.876598 | orchestrator | changed: [testbed-node-2] 2025-05-30 01:09:26.876607 | orchestrator | 2025-05-30 01:09:26.876617 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-05-30 01:09:26.876626 | orchestrator | Friday 30 May 2025 01:06:48 +0000 (0:00:07.500) 0:00:38.924 ************ 2025-05-30 01:09:26.876636 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-30 01:09:26.876645 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-30 01:09:26.876655 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-30 01:09:26.876665 | orchestrator | 2025-05-30 01:09:26.876674 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-05-30 01:09:26.876683 | orchestrator | Friday 30 May 2025 01:06:50 +0000 (0:00:01.862) 0:00:40.787 ************ 2025-05-30 01:09:26.876693 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-30 01:09:26.876702 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-30 01:09:26.876712 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-30 01:09:26.876721 | orchestrator | 2025-05-30 01:09:26.876731 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-05-30 01:09:26.876740 | orchestrator | Friday 30 May 2025 01:06:52 +0000 (0:00:01.329) 0:00:42.117 ************ 2025-05-30 01:09:26.876750 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:09:26.876759 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:09:26.876769 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:09:26.876778 | orchestrator | 2025-05-30 01:09:26.876788 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-05-30 01:09:26.876797 | orchestrator | Friday 30 May 2025 01:06:52 +0000 (0:00:00.757) 0:00:42.874 ************ 2025-05-30 01:09:26.876807 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:09:26.876816 | orchestrator | 2025-05-30 01:09:26.876825 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-05-30 01:09:26.876835 | orchestrator | Friday 30 May 2025 01:06:52 +0000 (0:00:00.120) 0:00:42.995 ************ 2025-05-30 01:09:26.876844 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:09:26.876854 | orchestrator | skipping: [testbed-node-1] 2025-05-30 
01:09:26.876869 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:09:26.876878 | orchestrator | 2025-05-30 01:09:26.876887 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-30 01:09:26.876897 | orchestrator | Friday 30 May 2025 01:06:53 +0000 (0:00:00.450) 0:00:43.446 ************ 2025-05-30 01:09:26.876906 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 01:09:26.876916 | orchestrator | 2025-05-30 01:09:26.876925 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-05-30 01:09:26.876934 | orchestrator | Friday 30 May 2025 01:06:54 +0000 (0:00:00.950) 0:00:44.396 ************ 2025-05-30 01:09:26.876956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-30 01:09:26.876968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-30 01:09:26.876996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-30 01:09:26.877008 | orchestrator | 2025-05-30 01:09:26.877018 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-05-30 01:09:26.877027 | orchestrator | Friday 30 May 2025 01:06:59 +0000 (0:00:05.664) 0:00:50.061 ************ 2025-05-30 01:09:26.877038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-30 01:09:26.877063 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:09:26.877080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-30 01:09:26.877098 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:09:26.877113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-30 01:09:26.877124 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:09:26.877134 | orchestrator | 2025-05-30 01:09:26.877143 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-05-30 01:09:26.877153 | orchestrator | Friday 30 May 2025 01:07:08 +0000 (0:00:08.415) 0:00:58.476 ************ 2025-05-30 01:09:26.877168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-30 01:09:26.877189 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:09:26.877203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-30 01:09:26.877214 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:09:26.877224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-30 01:09:26.877240 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:09:26.877249 | orchestrator | 2025-05-30 01:09:26.877259 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-05-30 
01:09:26.877268 | orchestrator | Friday 30 May 2025 01:07:15 +0000 (0:00:07.600) 0:01:06.076 ************ 2025-05-30 01:09:26.877278 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:09:26.877288 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:09:26.877297 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:09:26.877306 | orchestrator | 2025-05-30 01:09:26.877320 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-05-30 01:09:26.877330 | orchestrator | Friday 30 May 2025 01:07:20 +0000 (0:00:04.052) 0:01:10.129 ************ 2025-05-30 01:09:26.877345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-30 01:09:26.877356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-30 01:09:26.877441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-30 01:09:26.877456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 
2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-30 01:09:26.877479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-30 01:09:26.877496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 
check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-30 01:09:26.877512 | orchestrator | 2025-05-30 01:09:26.877522 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-05-30 01:09:26.877532 | orchestrator | Friday 30 May 2025 01:07:24 +0000 (0:00:04.545) 0:01:14.674 ************ 2025-05-30 01:09:26.877541 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:09:26.877551 | orchestrator | changed: [testbed-node-2] 2025-05-30 01:09:26.877560 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:09:26.877569 | orchestrator | 2025-05-30 01:09:26.877579 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-05-30 01:09:26.877588 | orchestrator | Friday 30 May 2025 01:07:42 +0000 (0:00:17.511) 0:01:32.186 ************ 2025-05-30 01:09:26.877598 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:09:26.877607 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:09:26.877616 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:09:26.877626 | orchestrator | 2025-05-30 01:09:26.877635 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-05-30 01:09:26.877644 | orchestrator | Friday 30 May 2025 01:07:54 +0000 (0:00:12.782) 0:01:44.969 ************ 2025-05-30 01:09:26.877654 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:09:26.877663 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:09:26.877673 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:09:26.877682 | orchestrator | 2025-05-30 01:09:26.877692 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-05-30 01:09:26.877701 | orchestrator | Friday 30 May 2025 01:08:00 +0000 (0:00:05.887) 0:01:50.856 ************ 2025-05-30 01:09:26.877711 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:09:26.877720 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:09:26.877729 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:09:26.877739 | orchestrator | 2025-05-30 01:09:26.877748 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-05-30 01:09:26.877758 | orchestrator | Friday 30 May 2025 01:08:06 +0000 (0:00:05.602) 0:01:56.459 ************ 2025-05-30 01:09:26.877767 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:09:26.877781 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:09:26.877791 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:09:26.877800 | orchestrator | 
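
The glance-api container definitions printed by the tasks above repeat the same HAProxy custom_member_list strings for all three controllers. As a minimal sketch, assuming only the node names, IPs and port 9292 visible in the log (this is not kolla-ansible's actual template logic), this is how such a "server ..." member line is composed:

    # Illustration only: rebuilds the HAProxy member lines that appear in the
    # custom_member_list entries above. Values are taken from the log output;
    # this is not how kolla-ansible renders its haproxy configuration.
    BACKENDS = {
        "testbed-node-0": "192.168.16.10",
        "testbed-node-1": "192.168.16.11",
        "testbed-node-2": "192.168.16.12",
    }

    def haproxy_member_line(name: str, ip: str, port: int = 9292,
                            inter_ms: int = 2000, rise: int = 2, fall: int = 5) -> str:
        """One 'server ...' line: probe every inter_ms milliseconds, mark the
        backend up after `rise` successes and down after `fall` failures."""
        return f"server {name} {ip}:{port} check inter {inter_ms} rise {rise} fall {fall}"

    if __name__ == "__main__":
        for name, ip in BACKENDS.items():
            print(haproxy_member_line(name, ip))

Running it prints the three member lines exactly as they appear in the dumps above, which makes the health-check parameters (probe every 2000 ms, up after 2 successes, down after 5 failures) easier to read.
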
2025-05-30 01:09:26.877810 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-05-30 01:09:26.877819 | orchestrator | Friday 30 May 2025 01:08:11 +0000 (0:00:05.171) 0:02:01.630 ************ 2025-05-30 01:09:26.877828 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:09:26.877838 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:09:26.877847 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:09:26.877857 | orchestrator | 2025-05-30 01:09:26.877866 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-05-30 01:09:26.877875 | orchestrator | Friday 30 May 2025 01:08:11 +0000 (0:00:00.232) 0:02:01.863 ************ 2025-05-30 01:09:26.877885 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-30 01:09:26.877895 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:09:26.877904 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-30 01:09:26.877914 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:09:26.877923 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-30 01:09:26.877933 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:09:26.877942 | orchestrator | 2025-05-30 01:09:26.877952 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-05-30 01:09:26.877971 | orchestrator | Friday 30 May 2025 01:08:15 +0000 (0:00:03.864) 0:02:05.728 ************ 2025-05-30 01:09:26.877990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-30 01:09:26.878073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 
'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-30 01:09:26.878109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 
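
The healthcheck block in the definitions above ('healthcheck_curl http://<node-ip>:9292', interval 30, retries 3, timeout 30) is what the container runtime uses to decide whether a glance_api container is healthy. Below is a rough Python stand-in for that probe, assuming only the URL shown in the log; it is an approximation for illustration, not kolla's healthcheck_curl implementation:

    # Approximate equivalent of the container healthcheck shown above.
    import urllib.request
    import urllib.error

    def glance_api_healthy(url: str = "http://192.168.16.11:9292", timeout: int = 30) -> bool:
        """Return True if the glance-api endpoint answers with an HTTP status below 500."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status < 500
        except urllib.error.HTTPError as exc:
            # The unversioned Glance endpoint typically replies 300 Multiple Choices,
            # which urllib raises as HTTPError; anything below 500 counts as "up" here.
            return exc.code < 500
        except (urllib.error.URLError, OSError):
            return False

    if __name__ == "__main__":
        print("healthy" if glance_api_healthy() else "unhealthy")

Treating any response below HTTP 500 as healthy is a deliberate simplification: the versions document is enough to show the API process is answering, even though it is not a 200.
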
2025-05-30 01:09:26.878151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-30 01:09:26.878177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-30 01:09:26.878204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-30 01:09:26.878215 | orchestrator | 2025-05-30 01:09:26.878225 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-30 01:09:26.878234 | orchestrator | Friday 30 May 2025 01:08:20 +0000 (0:00:05.343) 0:02:11.071 ************ 2025-05-30 01:09:26.878244 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:09:26.878253 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:09:26.878263 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:09:26.878272 | orchestrator | 2025-05-30 01:09:26.878286 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-05-30 01:09:26.878296 | orchestrator | Friday 30 May 2025 01:08:21 +0000 (0:00:00.348) 0:02:11.420 ************ 2025-05-30 01:09:26.878306 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:09:26.878315 | orchestrator | 2025-05-30 01:09:26.878325 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-05-30 01:09:26.878334 | orchestrator | Friday 30 May 2025 01:08:23 +0000 (0:00:02.264) 0:02:13.685 ************ 2025-05-30 01:09:26.878343 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:09:26.878363 | orchestrator | 2025-05-30 01:09:26.878373 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-05-30 01:09:26.878382 | orchestrator | Friday 30 May 2025 01:08:26 +0000 (0:00:02.546) 0:02:16.231 
************ 2025-05-30 01:09:26.878392 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:09:26.878401 | orchestrator | 2025-05-30 01:09:26.878410 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-05-30 01:09:26.878420 | orchestrator | Friday 30 May 2025 01:08:28 +0000 (0:00:02.128) 0:02:18.359 ************ 2025-05-30 01:09:26.878429 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:09:26.878439 | orchestrator | 2025-05-30 01:09:26.878448 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-05-30 01:09:26.878458 | orchestrator | Friday 30 May 2025 01:08:53 +0000 (0:00:24.944) 0:02:43.303 ************ 2025-05-30 01:09:26.878467 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:09:26.878476 | orchestrator | 2025-05-30 01:09:26.878486 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-30 01:09:26.878502 | orchestrator | Friday 30 May 2025 01:08:55 +0000 (0:00:02.022) 0:02:45.326 ************ 2025-05-30 01:09:26.878519 | orchestrator | 2025-05-30 01:09:26.878535 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-30 01:09:26.878550 | orchestrator | Friday 30 May 2025 01:08:55 +0000 (0:00:00.051) 0:02:45.378 ************ 2025-05-30 01:09:26.878565 | orchestrator | 2025-05-30 01:09:26.878581 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-30 01:09:26.878596 | orchestrator | Friday 30 May 2025 01:08:55 +0000 (0:00:00.048) 0:02:45.426 ************ 2025-05-30 01:09:26.878614 | orchestrator | 2025-05-30 01:09:26.878630 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-05-30 01:09:26.878646 | orchestrator | Friday 30 May 2025 01:08:55 +0000 (0:00:00.132) 0:02:45.558 ************ 2025-05-30 01:09:26.878658 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:09:26.878668 | orchestrator | changed: [testbed-node-2] 2025-05-30 01:09:26.878677 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:09:26.878686 | orchestrator | 2025-05-30 01:09:26.878696 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 01:09:26.878706 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-05-30 01:09:26.878716 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-30 01:09:26.878725 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-30 01:09:26.878735 | orchestrator | 2025-05-30 01:09:26.878744 | orchestrator | 2025-05-30 01:09:26.878754 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-30 01:09:26.878763 | orchestrator | Friday 30 May 2025 01:09:25 +0000 (0:00:30.528) 0:03:16.087 ************ 2025-05-30 01:09:26.878772 | orchestrator | =============================================================================== 2025-05-30 01:09:26.878782 | orchestrator | glance : Restart glance-api container ---------------------------------- 30.53s 2025-05-30 01:09:26.878792 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 24.94s 2025-05-30 01:09:26.878801 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 17.51s 
2025-05-30 01:09:26.878811 | orchestrator | glance : Copying over glance-cache.conf for glance_api ----------------- 12.78s 2025-05-30 01:09:26.878820 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 8.42s 2025-05-30 01:09:26.878830 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 7.60s 2025-05-30 01:09:26.878839 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 7.50s 2025-05-30 01:09:26.878849 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.37s 2025-05-30 01:09:26.878866 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 5.89s 2025-05-30 01:09:26.878876 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.66s 2025-05-30 01:09:26.878885 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 5.60s 2025-05-30 01:09:26.878895 | orchestrator | glance : Check glance containers ---------------------------------------- 5.34s 2025-05-30 01:09:26.878904 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 5.17s 2025-05-30 01:09:26.878913 | orchestrator | glance : Copying over config.json files for services -------------------- 4.55s 2025-05-30 01:09:26.878923 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.18s 2025-05-30 01:09:26.878932 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.05s 2025-05-30 01:09:26.878942 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.98s 2025-05-30 01:09:26.878951 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.86s 2025-05-30 01:09:26.878961 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.75s 2025-05-30 01:09:26.878976 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.45s 2025-05-30 01:09:26.878986 | orchestrator | 2025-05-30 01:09:26 | INFO  | Task 460b2d4d-c4da-40e3-8777-ca6ccbdd1165 is in state SUCCESS 2025-05-30 01:09:26.878995 | orchestrator | 2025-05-30 01:09:26 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:09:26.879005 | orchestrator | 2025-05-30 01:09:26 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:09:29.933208 | orchestrator | 2025-05-30 01:09:29 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:09:29.936036 | orchestrator | 2025-05-30 01:09:29 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:09:29.936757 | orchestrator | 2025-05-30 01:09:29 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:09:29.940395 | orchestrator | 2025-05-30 01:09:29 | INFO  | Task 3f0793fd-b9c4-476c-b53e-ab95006f61fc is in state STARTED 2025-05-30 01:09:29.941382 | orchestrator | 2025-05-30 01:09:29 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:09:29.941594 | orchestrator | 2025-05-30 01:09:29 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:09:32.984301 | orchestrator | 2025-05-30 01:09:32 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:09:32.986196 | orchestrator | 2025-05-30 01:09:32 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state 
STARTED 2025-05-30 01:09:32.987043 | orchestrator | 2025-05-30 01:09:32 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:09:32.988346 | orchestrator | 2025-05-30 01:09:32 | INFO  | Task 3f0793fd-b9c4-476c-b53e-ab95006f61fc is in state STARTED 2025-05-30 01:09:32.989040 | orchestrator | 2025-05-30 01:09:32 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:09:32.989327 | orchestrator | 2025-05-30 01:09:32 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:09:36.036340 | orchestrator | 2025-05-30 01:09:36 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:09:36.038594 | orchestrator | 2025-05-30 01:09:36 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:09:36.039994 | orchestrator | 2025-05-30 01:09:36 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:09:36.042005 | orchestrator | 2025-05-30 01:09:36 | INFO  | Task 3f0793fd-b9c4-476c-b53e-ab95006f61fc is in state STARTED 2025-05-30 01:09:36.042813 | orchestrator | 2025-05-30 01:09:36 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:09:36.043109 | orchestrator | 2025-05-30 01:09:36 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:09:39.088708 | orchestrator | 2025-05-30 01:09:39 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:09:39.090364 | orchestrator | 2025-05-30 01:09:39 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:09:39.090924 | orchestrator | 2025-05-30 01:09:39 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:09:39.091687 | orchestrator | 2025-05-30 01:09:39 | INFO  | Task 3f0793fd-b9c4-476c-b53e-ab95006f61fc is in state STARTED 2025-05-30 01:09:39.092764 | orchestrator | 2025-05-30 01:09:39 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:09:39.092786 | orchestrator | 2025-05-30 01:09:39 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:09:42.130127 | orchestrator | 2025-05-30 01:09:42 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:09:42.130846 | orchestrator | 2025-05-30 01:09:42 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:09:42.132281 | orchestrator | 2025-05-30 01:09:42 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:09:42.133757 | orchestrator | 2025-05-30 01:09:42 | INFO  | Task 3f0793fd-b9c4-476c-b53e-ab95006f61fc is in state STARTED 2025-05-30 01:09:42.134975 | orchestrator | 2025-05-30 01:09:42 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:09:42.135004 | orchestrator | 2025-05-30 01:09:42 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:09:45.189526 | orchestrator | 2025-05-30 01:09:45 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:09:45.191285 | orchestrator | 2025-05-30 01:09:45 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:09:45.191317 | orchestrator | 2025-05-30 01:09:45 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:09:45.192218 | orchestrator | 2025-05-30 01:09:45 | INFO  | Task 3f0793fd-b9c4-476c-b53e-ab95006f61fc is in state STARTED 2025-05-30 01:09:45.194711 | orchestrator | 2025-05-30 01:09:45 | INFO  | Task 
037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:09:45.195093 | orchestrator | 2025-05-30 01:09:45 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:09:48.269665 | orchestrator | 2025-05-30 01:09:48 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:09:48.271918 | orchestrator | 2025-05-30 01:09:48 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:09:48.273363 | orchestrator | 2025-05-30 01:09:48 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:09:48.275131 | orchestrator | 2025-05-30 01:09:48 | INFO  | Task 3f0793fd-b9c4-476c-b53e-ab95006f61fc is in state STARTED 2025-05-30 01:09:48.276695 | orchestrator | 2025-05-30 01:09:48 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:09:48.276745 | orchestrator | 2025-05-30 01:09:48 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:09:51.320812 | orchestrator | 2025-05-30 01:09:51 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:09:51.322176 | orchestrator | 2025-05-30 01:09:51 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:09:51.324199 | orchestrator | 2025-05-30 01:09:51 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:09:51.325024 | orchestrator | 2025-05-30 01:09:51 | INFO  | Task 3f0793fd-b9c4-476c-b53e-ab95006f61fc is in state STARTED 2025-05-30 01:09:51.325943 | orchestrator | 2025-05-30 01:09:51 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:09:51.325963 | orchestrator | 2025-05-30 01:09:51 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:09:54.378723 | orchestrator | 2025-05-30 01:09:54 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:09:54.378860 | orchestrator | 2025-05-30 01:09:54 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:09:54.380852 | orchestrator | 2025-05-30 01:09:54 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:09:54.382605 | orchestrator | 2025-05-30 01:09:54 | INFO  | Task 3f0793fd-b9c4-476c-b53e-ab95006f61fc is in state STARTED 2025-05-30 01:09:54.387944 | orchestrator | 2025-05-30 01:09:54 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:09:54.387999 | orchestrator | 2025-05-30 01:09:54 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:09:57.427746 | orchestrator | 2025-05-30 01:09:57 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:09:57.427937 | orchestrator | 2025-05-30 01:09:57 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:09:57.428213 | orchestrator | 2025-05-30 01:09:57 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:09:57.429368 | orchestrator | 2025-05-30 01:09:57 | INFO  | Task 3f0793fd-b9c4-476c-b53e-ab95006f61fc is in state STARTED 2025-05-30 01:09:57.429876 | orchestrator | 2025-05-30 01:09:57 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:09:57.431543 | orchestrator | 2025-05-30 01:09:57 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:10:00.475925 | orchestrator | 2025-05-30 01:10:00 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:10:00.476990 | orchestrator | 2025-05-30 01:10:00 | INFO  | Task 
ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:10:00.478321 | orchestrator | 2025-05-30 01:10:00 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:10:00.479840 | orchestrator | 2025-05-30 01:10:00 | INFO  | Task 3f0793fd-b9c4-476c-b53e-ab95006f61fc is in state STARTED 2025-05-30 01:10:00.481295 | orchestrator | 2025-05-30 01:10:00 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:10:00.481315 | orchestrator | 2025-05-30 01:10:00 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:10:03.532250 | orchestrator | 2025-05-30 01:10:03 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:10:03.533273 | orchestrator | 2025-05-30 01:10:03 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:10:03.536014 | orchestrator | 2025-05-30 01:10:03 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:10:03.536888 | orchestrator | 2025-05-30 01:10:03 | INFO  | Task 3f0793fd-b9c4-476c-b53e-ab95006f61fc is in state STARTED 2025-05-30 01:10:03.538408 | orchestrator | 2025-05-30 01:10:03 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:10:03.538431 | orchestrator | 2025-05-30 01:10:03 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:10:06.601834 | orchestrator | 2025-05-30 01:10:06 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:10:06.602369 | orchestrator | 2025-05-30 01:10:06 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:10:06.603392 | orchestrator | 2025-05-30 01:10:06 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:10:06.604417 | orchestrator | 2025-05-30 01:10:06 | INFO  | Task 3f0793fd-b9c4-476c-b53e-ab95006f61fc is in state STARTED 2025-05-30 01:10:06.605614 | orchestrator | 2025-05-30 01:10:06 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:10:06.605637 | orchestrator | 2025-05-30 01:10:06 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:10:09.658523 | orchestrator | 2025-05-30 01:10:09 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:10:09.658651 | orchestrator | 2025-05-30 01:10:09 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:10:09.658668 | orchestrator | 2025-05-30 01:10:09 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:10:09.659815 | orchestrator | 2025-05-30 01:10:09 | INFO  | Task 3f0793fd-b9c4-476c-b53e-ab95006f61fc is in state STARTED 2025-05-30 01:10:09.661097 | orchestrator | 2025-05-30 01:10:09 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:10:09.661117 | orchestrator | 2025-05-30 01:10:09 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:10:12.709008 | orchestrator | 2025-05-30 01:10:12 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:10:12.709892 | orchestrator | 2025-05-30 01:10:12 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:10:12.710863 | orchestrator | 2025-05-30 01:10:12 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:10:12.712089 | orchestrator | 2025-05-30 01:10:12 | INFO  | Task 3f0793fd-b9c4-476c-b53e-ab95006f61fc is in state STARTED 2025-05-30 01:10:12.714281 | orchestrator | 2025-05-30 
01:10:12 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:10:12.714612 | orchestrator | 2025-05-30 01:10:12 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:10:15.775540 | orchestrator | 2025-05-30 01:10:15 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:10:15.776542 | orchestrator | 2025-05-30 01:10:15 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:10:15.778134 | orchestrator | 2025-05-30 01:10:15 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:10:15.778805 | orchestrator | 2025-05-30 01:10:15 | INFO  | Task 3f0793fd-b9c4-476c-b53e-ab95006f61fc is in state STARTED 2025-05-30 01:10:15.780829 | orchestrator | 2025-05-30 01:10:15 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:10:15.780887 | orchestrator | 2025-05-30 01:10:15 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:10:18.844722 | orchestrator | 2025-05-30 01:10:18 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:10:18.848381 | orchestrator | 2025-05-30 01:10:18 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:10:18.851602 | orchestrator | 2025-05-30 01:10:18 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:10:18.854377 | orchestrator | 2025-05-30 01:10:18 | INFO  | Task 3f0793fd-b9c4-476c-b53e-ab95006f61fc is in state STARTED 2025-05-30 01:10:18.856182 | orchestrator | 2025-05-30 01:10:18 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:10:18.856344 | orchestrator | 2025-05-30 01:10:18 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:10:21.923159 | orchestrator | 2025-05-30 01:10:21 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:10:21.924394 | orchestrator | 2025-05-30 01:10:21 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:10:21.927866 | orchestrator | 2025-05-30 01:10:21 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:10:21.930669 | orchestrator | 2025-05-30 01:10:21 | INFO  | Task 3f0793fd-b9c4-476c-b53e-ab95006f61fc is in state STARTED 2025-05-30 01:10:21.934700 | orchestrator | 2025-05-30 01:10:21 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:10:21.934726 | orchestrator | 2025-05-30 01:10:21 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:10:24.986482 | orchestrator | 2025-05-30 01:10:24 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:10:24.988305 | orchestrator | 2025-05-30 01:10:24 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:10:24.992505 | orchestrator | 2025-05-30 01:10:24 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:10:24.997365 | orchestrator | 2025-05-30 01:10:24 | INFO  | Task 3f0793fd-b9c4-476c-b53e-ab95006f61fc is in state SUCCESS 2025-05-30 01:10:24.998353 | orchestrator | 2025-05-30 01:10:24 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:10:24.998382 | orchestrator | 2025-05-30 01:10:24 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:10:28.061184 | orchestrator | 2025-05-30 01:10:28 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:10:28.062761 | orchestrator | 2025-05-30 
01:10:28 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:10:28.064754 | orchestrator | 2025-05-30 01:10:28 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:10:28.066898 | orchestrator | 2025-05-30 01:10:28 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:10:28.066946 | orchestrator | 2025-05-30 01:10:28 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:10:31.123377 | orchestrator | 2025-05-30 01:10:31 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:10:31.125565 | orchestrator | 2025-05-30 01:10:31 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:10:31.126940 | orchestrator | 2025-05-30 01:10:31 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:10:31.129432 | orchestrator | 2025-05-30 01:10:31 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:10:31.129531 | orchestrator | 2025-05-30 01:10:31 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:10:34.177978 | orchestrator | 2025-05-30 01:10:34 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:10:34.179696 | orchestrator | 2025-05-30 01:10:34 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:10:34.181574 | orchestrator | 2025-05-30 01:10:34 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:10:34.183889 | orchestrator | 2025-05-30 01:10:34 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:10:34.183950 | orchestrator | 2025-05-30 01:10:34 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:10:37.236446 | orchestrator | 2025-05-30 01:10:37 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:10:37.238202 | orchestrator | 2025-05-30 01:10:37 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:10:37.241953 | orchestrator | 2025-05-30 01:10:37 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:10:37.243652 | orchestrator | 2025-05-30 01:10:37 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:10:37.243692 | orchestrator | 2025-05-30 01:10:37 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:10:40.294822 | orchestrator | 2025-05-30 01:10:40 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:10:40.295368 | orchestrator | 2025-05-30 01:10:40 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:10:40.297089 | orchestrator | 2025-05-30 01:10:40 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:10:40.298802 | orchestrator | 2025-05-30 01:10:40 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:10:40.298855 | orchestrator | 2025-05-30 01:10:40 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:10:43.348693 | orchestrator | 2025-05-30 01:10:43 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:10:43.348813 | orchestrator | 2025-05-30 01:10:43 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:10:43.348839 | orchestrator | 2025-05-30 01:10:43 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:10:43.349689 | orchestrator | 2025-05-30 
01:10:43 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:10:43.349718 | orchestrator | 2025-05-30 01:10:43 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:10:46.438386 | orchestrator | 2025-05-30 01:10:46 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:10:46.439556 | orchestrator | 2025-05-30 01:10:46 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:10:46.448588 | orchestrator | 2025-05-30 01:10:46 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:10:46.449148 | orchestrator | 2025-05-30 01:10:46 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:10:46.450141 | orchestrator | 2025-05-30 01:10:46 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:10:49.501697 | orchestrator | 2025-05-30 01:10:49 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:10:49.502631 | orchestrator | 2025-05-30 01:10:49 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:10:49.504337 | orchestrator | 2025-05-30 01:10:49 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:10:49.505199 | orchestrator | 2025-05-30 01:10:49 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:10:49.505488 | orchestrator | 2025-05-30 01:10:49 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:10:52.568652 | orchestrator | 2025-05-30 01:10:52 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:10:52.570724 | orchestrator | 2025-05-30 01:10:52 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:10:52.570771 | orchestrator | 2025-05-30 01:10:52 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:10:52.570784 | orchestrator | 2025-05-30 01:10:52 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:10:52.570825 | orchestrator | 2025-05-30 01:10:52 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:10:55.612459 | orchestrator | 2025-05-30 01:10:55 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:10:55.613198 | orchestrator | 2025-05-30 01:10:55 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:10:55.617047 | orchestrator | 2025-05-30 01:10:55 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:10:55.618476 | orchestrator | 2025-05-30 01:10:55 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:10:55.618622 | orchestrator | 2025-05-30 01:10:55 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:10:58.673416 | orchestrator | 2025-05-30 01:10:58 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:10:58.674559 | orchestrator | 2025-05-30 01:10:58 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:10:58.677044 | orchestrator | 2025-05-30 01:10:58 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:10:58.678391 | orchestrator | 2025-05-30 01:10:58 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:10:58.678417 | orchestrator | 2025-05-30 01:10:58 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:11:01.735197 | orchestrator | 2025-05-30 01:11:01 | INFO  | Task 
fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:11:01.738283 | orchestrator | 2025-05-30 01:11:01 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:11:01.740083 | orchestrator | 2025-05-30 01:11:01 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:11:01.742928 | orchestrator | 2025-05-30 01:11:01 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:11:01.743351 | orchestrator | 2025-05-30 01:11:01 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:11:04.799475 | orchestrator | 2025-05-30 01:11:04 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:11:04.801075 | orchestrator | 2025-05-30 01:11:04 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:11:04.802114 | orchestrator | 2025-05-30 01:11:04 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:11:04.803324 | orchestrator | 2025-05-30 01:11:04 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:11:04.803372 | orchestrator | 2025-05-30 01:11:04 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:11:07.868067 | orchestrator | 2025-05-30 01:11:07 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:11:07.868545 | orchestrator | 2025-05-30 01:11:07 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:11:07.869947 | orchestrator | 2025-05-30 01:11:07 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:11:07.871865 | orchestrator | 2025-05-30 01:11:07 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:11:07.871956 | orchestrator | 2025-05-30 01:11:07 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:11:10.928090 | orchestrator | 2025-05-30 01:11:10 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:11:10.931804 | orchestrator | 2025-05-30 01:11:10 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:11:10.933480 | orchestrator | 2025-05-30 01:11:10 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:11:10.934875 | orchestrator | 2025-05-30 01:11:10 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:11:10.934898 | orchestrator | 2025-05-30 01:11:10 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:11:13.993335 | orchestrator | 2025-05-30 01:11:13 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:11:13.994782 | orchestrator | 2025-05-30 01:11:13 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:11:13.996199 | orchestrator | 2025-05-30 01:11:13 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:11:13.996944 | orchestrator | 2025-05-30 01:11:13 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:11:13.997011 | orchestrator | 2025-05-30 01:11:13 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:11:17.043297 | orchestrator | 2025-05-30 01:11:17 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:11:17.043728 | orchestrator | 2025-05-30 01:11:17 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:11:17.044664 | orchestrator | 2025-05-30 01:11:17 | INFO  | Task 
bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:11:17.045669 | orchestrator | 2025-05-30 01:11:17 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:11:17.045693 | orchestrator | 2025-05-30 01:11:17 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:11:20.092637 | orchestrator | 2025-05-30 01:11:20 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:11:20.093758 | orchestrator | 2025-05-30 01:11:20 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:11:20.095114 | orchestrator | 2025-05-30 01:11:20 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:11:20.096794 | orchestrator | 2025-05-30 01:11:20 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:11:20.097230 | orchestrator | 2025-05-30 01:11:20 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:11:23.135162 | orchestrator | 2025-05-30 01:11:23 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:11:23.136247 | orchestrator | 2025-05-30 01:11:23 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:11:23.138107 | orchestrator | 2025-05-30 01:11:23 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state STARTED 2025-05-30 01:11:23.139083 | orchestrator | 2025-05-30 01:11:23 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:11:23.139161 | orchestrator | 2025-05-30 01:11:23 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:11:26.189231 | orchestrator | 2025-05-30 01:11:26 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:11:26.191351 | orchestrator | 2025-05-30 01:11:26 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:11:26.194422 | orchestrator | 2025-05-30 01:11:26 | INFO  | Task bc74eb95-bcb1-4d09-9c04-51ee3d8cdc10 is in state SUCCESS 2025-05-30 01:11:26.197851 | orchestrator | 2025-05-30 01:11:26.197893 | orchestrator | 2025-05-30 01:11:26.197906 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-30 01:11:26.197918 | orchestrator | 2025-05-30 01:11:26.197930 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-30 01:11:26.197994 | orchestrator | Friday 30 May 2025 01:09:29 +0000 (0:00:00.298) 0:00:00.298 ************ 2025-05-30 01:11:26.198008 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:11:26.198058 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:11:26.198070 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:11:26.198081 | orchestrator | 2025-05-30 01:11:26.198219 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-30 01:11:26.198263 | orchestrator | Friday 30 May 2025 01:09:29 +0000 (0:00:00.411) 0:00:00.709 ************ 2025-05-30 01:11:26.198276 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-05-30 01:11:26.198287 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-05-30 01:11:26.198298 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-05-30 01:11:26.198309 | orchestrator | 2025-05-30 01:11:26.198407 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-05-30 01:11:26.198421 | orchestrator | 2025-05-30 01:11:26.198432 | orchestrator | TASK 
[octavia : include_tasks] ************************************************* 2025-05-30 01:11:26.198445 | orchestrator | Friday 30 May 2025 01:09:29 +0000 (0:00:00.287) 0:00:00.996 ************ 2025-05-30 01:11:26.198457 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 01:11:26.198471 | orchestrator | 2025-05-30 01:11:26.198483 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-05-30 01:11:26.198496 | orchestrator | Friday 30 May 2025 01:09:30 +0000 (0:00:00.751) 0:00:01.748 ************ 2025-05-30 01:11:26.198509 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-05-30 01:11:26.198522 | orchestrator | 2025-05-30 01:11:26.198534 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-05-30 01:11:26.198547 | orchestrator | Friday 30 May 2025 01:09:33 +0000 (0:00:03.208) 0:00:04.956 ************ 2025-05-30 01:11:26.198560 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-05-30 01:11:26.198572 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-05-30 01:11:26.198585 | orchestrator | 2025-05-30 01:11:26.198597 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-05-30 01:11:26.198610 | orchestrator | Friday 30 May 2025 01:09:40 +0000 (0:00:06.330) 0:00:11.286 ************ 2025-05-30 01:11:26.198622 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-30 01:11:26.198635 | orchestrator | 2025-05-30 01:11:26.198648 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-05-30 01:11:26.198661 | orchestrator | Friday 30 May 2025 01:09:43 +0000 (0:00:03.216) 0:00:14.502 ************ 2025-05-30 01:11:26.198673 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-30 01:11:26.198685 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-05-30 01:11:26.198698 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-05-30 01:11:26.198710 | orchestrator | 2025-05-30 01:11:26.198723 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-05-30 01:11:26.198735 | orchestrator | Friday 30 May 2025 01:09:51 +0000 (0:00:08.070) 0:00:22.573 ************ 2025-05-30 01:11:26.198747 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-30 01:11:26.198759 | orchestrator | 2025-05-30 01:11:26.198772 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-05-30 01:11:26.198783 | orchestrator | Friday 30 May 2025 01:09:54 +0000 (0:00:03.236) 0:00:25.809 ************ 2025-05-30 01:11:26.198794 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-05-30 01:11:26.198804 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-05-30 01:11:26.198816 | orchestrator | 2025-05-30 01:11:26.198827 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-05-30 01:11:26.198837 | orchestrator | Friday 30 May 2025 01:10:02 +0000 (0:00:07.832) 0:00:33.642 ************ 2025-05-30 01:11:26.198858 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-05-30 01:11:26.198869 | orchestrator | changed: 
[testbed-node-0] => (item=load-balancer_global_observer) 2025-05-30 01:11:26.198879 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-05-30 01:11:26.198890 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-05-30 01:11:26.198901 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-05-30 01:11:26.198912 | orchestrator | 2025-05-30 01:11:26.198922 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-30 01:11:26.198933 | orchestrator | Friday 30 May 2025 01:10:17 +0000 (0:00:15.109) 0:00:48.752 ************ 2025-05-30 01:11:26.198944 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 01:11:26.198955 | orchestrator | 2025-05-30 01:11:26.199001 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-05-30 01:11:26.199012 | orchestrator | Friday 30 May 2025 01:10:18 +0000 (0:00:00.854) 0:00:49.607 ************ 2025-05-30 01:11:26.199042 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "os_nova_flavor", "changed": false, "extra_data": {"data": null, "details": "503 Service Unavailable: No server is available to handle this request.: ", "response": "
503 Service Unavailable
\nNo server is available to handle this request.\n\n"}, "msg": "HttpException: 503: Server Error for url: https://api-int.testbed.osism.xyz:8774/v2.1/flavors/amphora, 503 Service Unavailable: No server is available to handle this request.: "} 2025-05-30 01:11:26.199058 | orchestrator | 2025-05-30 01:11:26.199070 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 01:11:26.199139 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-05-30 01:11:26.199154 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 01:11:26.199166 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 01:11:26.199177 | orchestrator | 2025-05-30 01:11:26.199188 | orchestrator | 2025-05-30 01:11:26.199205 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-30 01:11:26.199216 | orchestrator | Friday 30 May 2025 01:10:21 +0000 (0:00:03.272) 0:00:52.879 ************ 2025-05-30 01:11:26.199227 | orchestrator | =============================================================================== 2025-05-30 01:11:26.199237 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.11s 2025-05-30 01:11:26.199248 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.07s 2025-05-30 01:11:26.199259 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.83s 2025-05-30 01:11:26.199270 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.33s 2025-05-30 01:11:26.199280 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.27s 2025-05-30 01:11:26.199291 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.24s 2025-05-30 01:11:26.199302 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.22s 2025-05-30 01:11:26.199313 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.21s 2025-05-30 01:11:26.199324 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.85s 2025-05-30 01:11:26.199334 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.75s 2025-05-30 01:11:26.199345 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.41s 2025-05-30 01:11:26.199356 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.29s 2025-05-30 01:11:26.199367 | orchestrator | 2025-05-30 01:11:26.199385 | orchestrator | 2025-05-30 01:11:26.199396 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-30 01:11:26.199407 | orchestrator | 2025-05-30 01:11:26.199418 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-30 01:11:26.199428 | orchestrator | Friday 30 May 2025 01:09:29 +0000 (0:00:00.298) 0:00:00.298 ************ 2025-05-30 01:11:26.199439 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:11:26.199450 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:11:26.199461 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:11:26.199471 | orchestrator | 2025-05-30 01:11:26.199482 | orchestrator | TASK [Group hosts based 
on enabled services] *********************************** 2025-05-30 01:11:26.199493 | orchestrator | Friday 30 May 2025 01:09:29 +0000 (0:00:00.365) 0:00:00.663 ************ 2025-05-30 01:11:26.199504 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-05-30 01:11:26.199515 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-05-30 01:11:26.199526 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-05-30 01:11:26.199537 | orchestrator | 2025-05-30 01:11:26.199548 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-05-30 01:11:26.199558 | orchestrator | 2025-05-30 01:11:26.199569 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-05-30 01:11:26.199580 | orchestrator | Friday 30 May 2025 01:09:30 +0000 (0:00:00.285) 0:00:00.949 ************ 2025-05-30 01:11:26.199591 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 01:11:26.199602 | orchestrator | 2025-05-30 01:11:26.199612 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-05-30 01:11:26.199623 | orchestrator | Friday 30 May 2025 01:09:30 +0000 (0:00:00.698) 0:00:01.648 ************ 2025-05-30 01:11:26.199636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-30 01:11:26.199694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-30 01:11:26.199714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-30 01:11:26.199725 | orchestrator | 2025-05-30 
01:11:26.199736 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-05-30 01:11:26.199755 | orchestrator | Friday 30 May 2025 01:09:31 +0000 (0:00:00.797) 0:00:02.445 ************ 2025-05-30 01:11:26.199766 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-05-30 01:11:26.199777 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-05-30 01:11:26.199787 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-30 01:11:26.199798 | orchestrator | 2025-05-30 01:11:26.199809 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-05-30 01:11:26.199820 | orchestrator | Friday 30 May 2025 01:09:32 +0000 (0:00:00.510) 0:00:02.955 ************ 2025-05-30 01:11:26.199831 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 01:11:26.199841 | orchestrator | 2025-05-30 01:11:26.199852 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-05-30 01:11:26.199863 | orchestrator | Friday 30 May 2025 01:09:32 +0000 (0:00:00.574) 0:00:03.529 ************ 2025-05-30 01:11:26.199874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-30 01:11:26.199886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-30 01:11:26.199898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-30 01:11:26.199910 | orchestrator | 2025-05-30 01:11:26.199920 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend 
internal TLS certificate] *** 2025-05-30 01:11:26.199931 | orchestrator | Friday 30 May 2025 01:09:33 +0000 (0:00:01.335) 0:00:04.865 ************ 2025-05-30 01:11:26.199950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-30 01:11:26.199990 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:11:26.200008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-30 01:11:26.200019 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:11:26.200030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-30 01:11:26.200042 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:11:26.200053 | orchestrator | 2025-05-30 01:11:26.200064 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-05-30 01:11:26.200074 | orchestrator | Friday 30 May 2025 01:09:34 +0000 (0:00:00.643) 0:00:05.508 ************ 2025-05-30 01:11:26.200086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-30 01:11:26.200097 | 
orchestrator | skipping: [testbed-node-0] 2025-05-30 01:11:26.200108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-30 01:11:26.200120 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:11:26.200151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-30 01:11:26.200173 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:11:26.200184 | orchestrator | 2025-05-30 01:11:26.200195 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-05-30 01:11:26.200206 | orchestrator | Friday 30 May 2025 01:09:35 +0000 (0:00:00.654) 0:00:06.162 ************ 2025-05-30 01:11:26.200222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-30 01:11:26.200234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-30 01:11:26.200246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-30 01:11:26.200257 | orchestrator | 2025-05-30 01:11:26.200268 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-05-30 01:11:26.200279 | orchestrator | Friday 30 May 2025 01:09:36 +0000 (0:00:01.405) 0:00:07.568 ************ 2025-05-30 01:11:26.200290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-30 01:11:26.200308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-30 01:11:26.200327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-30 01:11:26.200344 | orchestrator | 2025-05-30 01:11:26.200355 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-05-30 01:11:26.200366 | orchestrator | Friday 30 May 2025 01:09:38 +0000 (0:00:01.655) 0:00:09.223 ************ 2025-05-30 01:11:26.200377 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:11:26.200388 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:11:26.200398 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:11:26.200409 | orchestrator | 2025-05-30 01:11:26.200420 | 
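(The next two tasks render Grafana provisioning files: prometheus.yaml.j2 registers Prometheus as a data source, and the provisioning.yaml overlay from /opt/configuration defines file-based dashboard providers. As rough orientation only, Grafana provisioning files of this kind typically have the shape sketched below; the concrete templates used by kolla-ansible and the OSISM overlay may differ, and the URL and path shown are placeholders, not values taken from this deployment.)

    # datasource provisioning file (shape of what a prometheus.yaml.j2 template typically renders)
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        url: http://prometheus.example:9091   # placeholder endpoint

    # dashboard provisioning file (separate file; shape of a file-based provider definition)
    apiVersion: 1
    providers:
      - name: default
        orgId: 1
        type: file
        options:
          path: /var/lib/grafana/dashboards   # placeholder path scanned for dashboard JSON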
orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-05-30 01:11:26.200431 | orchestrator | Friday 30 May 2025 01:09:38 +0000 (0:00:00.286) 0:00:09.509 ************ 2025-05-30 01:11:26.200441 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-30 01:11:26.200452 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-30 01:11:26.200463 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-30 01:11:26.200474 | orchestrator | 2025-05-30 01:11:26.200484 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-05-30 01:11:26.200495 | orchestrator | Friday 30 May 2025 01:09:40 +0000 (0:00:01.397) 0:00:10.907 ************ 2025-05-30 01:11:26.200506 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-30 01:11:26.200517 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-30 01:11:26.200528 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-30 01:11:26.200539 | orchestrator | 2025-05-30 01:11:26.200550 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-05-30 01:11:26.200560 | orchestrator | Friday 30 May 2025 01:09:41 +0000 (0:00:01.379) 0:00:12.286 ************ 2025-05-30 01:11:26.200571 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-30 01:11:26.200582 | orchestrator | 2025-05-30 01:11:26.200593 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-05-30 01:11:26.200603 | orchestrator | Friday 30 May 2025 01:09:41 +0000 (0:00:00.436) 0:00:12.722 ************ 2025-05-30 01:11:26.200614 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-05-30 01:11:26.200625 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-05-30 01:11:26.200636 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:11:26.200647 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:11:26.200657 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:11:26.200668 | orchestrator | 2025-05-30 01:11:26.200679 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-05-30 01:11:26.200690 | orchestrator | Friday 30 May 2025 01:09:42 +0000 (0:00:00.867) 0:00:13.590 ************ 2025-05-30 01:11:26.200700 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:11:26.200711 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:11:26.200728 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:11:26.200739 | orchestrator | 2025-05-30 01:11:26.200750 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-05-30 01:11:26.200760 | orchestrator | Friday 30 May 2025 01:09:43 +0000 (0:00:00.428) 0:00:14.018 ************ 2025-05-30 01:11:26.200772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 167897, 'inode': 1315689, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.30059, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.200791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1315689, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.30059, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.200815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1315689, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.30059, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.200827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1315655, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.27959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.200838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1315655, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.27959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.200849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1315655, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.27959, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.200867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1315638, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.27559, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.200884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1315638, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.27559, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.200896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1315638, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.27559, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.200912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1315682, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.2955902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.200924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1315682, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.2955902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 
01:11:26.200935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1315682, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.2955902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.200952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1315625, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.26859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.201033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1315625, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.26859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.201574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1315625, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.26859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.201601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1315648, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.2765899, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.201612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1315648, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.2765899, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.201622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1315648, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.2765899, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.201643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1315662, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.29459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.201654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1315662, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.29459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.201670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1315662, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.29459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.201686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 
'inode': 1315620, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.2675898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.201696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1315620, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.2675898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.201706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1315620, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.2675898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.201723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1315603, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.2605898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.201733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1315603, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.2605898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.201743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1315603, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.2605898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.201759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1315628, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.26959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.201775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1315628, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.26959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.201785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1315628, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.26959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.201795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1315612, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.2635899, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.201812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1315612, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.2635899, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.201823 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1315612, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.2635899, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.201859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1315660, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.28059, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.201876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1315660, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.28059, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.201886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1315660, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.28059, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.201897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1315631, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.27159, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.201914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1315631, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.27159, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.201925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1315631, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.27159, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1315685, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.29659, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1315685, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.29659, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1315685, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.29659, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1315618, 'dev': 188, 'nlink': 1, 'atime': 
1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.26559, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1315618, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.26559, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1315618, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.26559, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1315651, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.27859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1315651, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.27859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1315651, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.27859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1315606, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.26259, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1315606, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.26259, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1315606, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.26259, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1315614, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.2645898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1315614, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.2645898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202251 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1315614, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.2645898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1315635, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.2735898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1315635, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.2735898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1315635, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.2735898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1315764, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3745906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1315764, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3745906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1315764, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3745906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1315756, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3155901, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1315756, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3155901, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1315756, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3155901, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1316022, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3815906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1316022, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3815906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1316022, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3815906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1315703, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3015902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1315703, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3015902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1315703, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3015902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1316032, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3845906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1316032, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3845906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1316032, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3845906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1316000, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3755906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 70691, 'inode': 1316000, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3755906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1316000, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3755906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1316006, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3765907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1316006, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3765907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1316006, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3765907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1315707, 'dev': 188, 'nlink': 1, 'atime': 
1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3025901, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1315707, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3025901, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1315707, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3025901, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1315762, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3165903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1315762, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3165903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1315762, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3165903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1316044, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3855908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1316044, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3855908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1316044, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3855908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1316009, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.3795907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1316009, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.3795907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-05-30 01:11:26.202636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1316009, 'dev': 188, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748564122.3795907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1315715, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3075902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1315715, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3075902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1315715, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3075902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1315710, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3025901, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202692 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1315710, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3025901, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1315710, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3025901, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1315728, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.30959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1315728, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.30959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1315728, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.30959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1315732, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3145902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1315732, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3145902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1315732, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3145902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1316048, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3945909, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1316048, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3945909, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 
1316048, 'dev': 188, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748564122.3945909, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-30 01:11:26.202814 | orchestrator | 2025-05-30 01:11:26.202823 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-05-30 01:11:26.202831 | orchestrator | Friday 30 May 2025 01:10:16 +0000 (0:00:32.904) 0:00:46.923 ************ 2025-05-30 01:11:26.202839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-30 01:11:26.202848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-30 01:11:26.202856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-30 01:11:26.202869 | orchestrator | 2025-05-30 01:11:26.202877 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-05-30 01:11:26.202885 | orchestrator | Friday 30 May 2025 01:10:17 +0000 (0:00:01.081) 0:00:48.004 ************ 2025-05-30 01:11:26.202893 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:11:26.202901 | orchestrator | 2025-05-30 01:11:26.202909 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-05-30 01:11:26.202921 | orchestrator | Friday 30 May 2025 01:10:19 +0000 (0:00:02.709) 0:00:50.714 ************ 2025-05-30 01:11:26.202929 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:11:26.202937 | orchestrator | 2025-05-30 01:11:26.202945 | orchestrator | 
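Editor's note: the "Copying over custom dashboards" loop above iterates over every dashboard JSON found under /operations/grafana/dashboards and installs it (mode 0644) into each node's Grafana config tree. A rough Python sketch of that behaviour is shown below; the source path and file mode are taken from the log, while the destination directory and the use of shutil are assumptions — the real role does this with Ansible find/copy modules.

```python
# Sketch only: mimic the per-item dashboard copy seen in the log output.
# SRC comes from the log; DST is an assumed per-node target directory.
import shutil
from pathlib import Path

SRC = Path("/operations/grafana/dashboards")   # source tree reported per item
DST = Path("/etc/kolla/grafana/dashboards")    # assumed node config target

for dashboard in sorted(SRC.rglob("*.json")):
    target = DST / dashboard.relative_to(SRC)  # keep infrastructure/, openstack/ subdirs
    target.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(dashboard, target)            # preserve timestamps, as the stat output shows
    target.chmod(0o644)                        # mode '0644' reported for every item
```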
TASK [grafana : Flush handlers] ************************************************ 2025-05-30 01:11:26.202953 | orchestrator | Friday 30 May 2025 01:10:22 +0000 (0:00:02.239) 0:00:52.953 ************ 2025-05-30 01:11:26.202978 | orchestrator | 2025-05-30 01:11:26.202986 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-05-30 01:11:26.202994 | orchestrator | Friday 30 May 2025 01:10:22 +0000 (0:00:00.063) 0:00:53.017 ************ 2025-05-30 01:11:26.203002 | orchestrator | 2025-05-30 01:11:26.203010 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-05-30 01:11:26.203018 | orchestrator | Friday 30 May 2025 01:10:22 +0000 (0:00:00.069) 0:00:53.086 ************ 2025-05-30 01:11:26.203025 | orchestrator | 2025-05-30 01:11:26.203033 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-05-30 01:11:26.203041 | orchestrator | Friday 30 May 2025 01:10:22 +0000 (0:00:00.191) 0:00:53.278 ************ 2025-05-30 01:11:26.203053 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:11:26.203061 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:11:26.203069 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:11:26.203077 | orchestrator | 2025-05-30 01:11:26.203085 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-05-30 01:11:26.203092 | orchestrator | Friday 30 May 2025 01:10:24 +0000 (0:00:02.006) 0:00:55.284 ************ 2025-05-30 01:11:26.203100 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:11:26.203108 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:11:26.203116 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-05-30 01:11:26.203124 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-05-30 01:11:26.203132 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 
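Editor's note: the FAILED - RETRYING lines above are the usual wait-and-retry pattern — poll the freshly restarted service until it answers, giving up after a fixed number of attempts (12 here). A minimal sketch of such a check follows; the health endpoint, port 3000 and the delay between attempts are assumptions, not values read from the role.

```python
# Sketch of a bounded health-check poll, corresponding to the
# "Waiting for grafana to start on first node" handler above.
import time
import urllib.request

def wait_for_grafana(url="http://127.0.0.1:3000/api/health",
                     retries=12, delay=10):
    for attempt in range(retries, 0, -1):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True                # the eventual "ok: [testbed-node-0]"
        except OSError:
            pass                               # "FAILED - RETRYING ... (n retries left)"
        time.sleep(delay)
    raise TimeoutError(f"Grafana did not become ready at {url}")
```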
2025-05-30 01:11:26.203140 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:11:26.203148 | orchestrator | 2025-05-30 01:11:26.203156 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-05-30 01:11:26.203163 | orchestrator | Friday 30 May 2025 01:11:02 +0000 (0:00:38.429) 0:01:33.713 ************ 2025-05-30 01:11:26.203171 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:11:26.203179 | orchestrator | changed: [testbed-node-2] 2025-05-30 01:11:26.203187 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:11:26.203195 | orchestrator | 2025-05-30 01:11:26.203203 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-05-30 01:11:26.203211 | orchestrator | Friday 30 May 2025 01:11:18 +0000 (0:00:15.541) 0:01:49.255 ************ 2025-05-30 01:11:26.203218 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:11:26.203226 | orchestrator | 2025-05-30 01:11:26.203234 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-05-30 01:11:26.203242 | orchestrator | Friday 30 May 2025 01:11:20 +0000 (0:00:02.200) 0:01:51.456 ************ 2025-05-30 01:11:26.203250 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:11:26.203258 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:11:26.203266 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:11:26.203274 | orchestrator | 2025-05-30 01:11:26.203286 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-05-30 01:11:26.203295 | orchestrator | Friday 30 May 2025 01:11:20 +0000 (0:00:00.413) 0:01:51.869 ************ 2025-05-30 01:11:26.203304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-05-30 01:11:26.203312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-05-30 01:11:26.203321 | orchestrator | 2025-05-30 01:11:26.203329 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-05-30 01:11:26.203337 | orchestrator | Friday 30 May 2025 01:11:23 +0000 (0:00:02.327) 0:01:54.197 ************ 2025-05-30 01:11:26.203345 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:11:26.203353 | orchestrator | 2025-05-30 01:11:26.203361 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 01:11:26.203369 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-30 01:11:26.203377 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-30 01:11:26.203385 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-30 01:11:26.203393 | orchestrator | 2025-05-30 01:11:26.203401 | orchestrator | 2025-05-30 01:11:26.203409 | orchestrator | TASKS RECAP 
******************************************************************** 2025-05-30 01:11:26.203416 | orchestrator | Friday 30 May 2025 01:11:23 +0000 (0:00:00.359) 0:01:54.557 ************ 2025-05-30 01:11:26.203424 | orchestrator | =============================================================================== 2025-05-30 01:11:26.203436 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.43s 2025-05-30 01:11:26.203445 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 32.90s 2025-05-30 01:11:26.203453 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 15.54s 2025-05-30 01:11:26.203460 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.71s 2025-05-30 01:11:26.203468 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.33s 2025-05-30 01:11:26.203476 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.24s 2025-05-30 01:11:26.203484 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.20s 2025-05-30 01:11:26.203492 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.01s 2025-05-30 01:11:26.203500 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.66s 2025-05-30 01:11:26.203508 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.41s 2025-05-30 01:11:26.203519 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.40s 2025-05-30 01:11:26.203527 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.38s 2025-05-30 01:11:26.203535 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.34s 2025-05-30 01:11:26.203543 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.08s 2025-05-30 01:11:26.203551 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.87s 2025-05-30 01:11:26.203559 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.80s 2025-05-30 01:11:26.203571 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.70s 2025-05-30 01:11:26.203579 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.65s 2025-05-30 01:11:26.203587 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.64s 2025-05-30 01:11:26.203595 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.57s 2025-05-30 01:11:26.203603 | orchestrator | 2025-05-30 01:11:26 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state STARTED 2025-05-30 01:11:26.203611 | orchestrator | 2025-05-30 01:11:26 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:11:29.246720 | orchestrator | 2025-05-30 01:11:29 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:11:29.247319 | orchestrator | 2025-05-30 01:11:29 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:11:29.248314 | orchestrator | 2025-05-30 01:11:29 | INFO  | Task 037837e3-b4f5-427a-9c91-81d5b5d8f56c is in state SUCCESS 2025-05-30 01:11:29.248447 | orchestrator | 2025-05-30 01:11:29 | INFO  | Wait 1 second(s) until the 
next check 2025-05-30 01:11:32.299221 | orchestrator | 2025-05-30 01:11:32 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:11:32.301912 | orchestrator | 2025-05-30 01:11:32 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:11:32.301995 | orchestrator | 2025-05-30 01:11:32 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:11:35.353601 | orchestrator | 2025-05-30 01:11:35 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:11:35.354280 | orchestrator | 2025-05-30 01:11:35 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:11:35.354326 | orchestrator | 2025-05-30 01:11:35 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:11:38.412106 | orchestrator | 2025-05-30 01:11:38 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:11:38.413490 | orchestrator | 2025-05-30 01:11:38 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:11:38.413520 | orchestrator | 2025-05-30 01:11:38 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:11:41.472012 | orchestrator | 2025-05-30 01:11:41 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:11:41.472114 | orchestrator | 2025-05-30 01:11:41 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:11:41.472129 | orchestrator | 2025-05-30 01:11:41 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:11:44.516266 | orchestrator | 2025-05-30 01:11:44 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:11:44.516374 | orchestrator | 2025-05-30 01:11:44 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:11:44.516390 | orchestrator | 2025-05-30 01:11:44 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:11:47.567676 | orchestrator | 2025-05-30 01:11:47 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:11:47.577354 | orchestrator | 2025-05-30 01:11:47 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:11:47.577446 | orchestrator | 2025-05-30 01:11:47 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:11:50.619626 | orchestrator | 2025-05-30 01:11:50 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:11:50.623243 | orchestrator | 2025-05-30 01:11:50 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:11:50.623308 | orchestrator | 2025-05-30 01:11:50 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:11:53.667352 | orchestrator | 2025-05-30 01:11:53 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:11:53.669504 | orchestrator | 2025-05-30 01:11:53 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:11:53.669711 | orchestrator | 2025-05-30 01:11:53 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:11:56.721856 | orchestrator | 2025-05-30 01:11:56 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:11:56.725139 | orchestrator | 2025-05-30 01:11:56 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:11:56.725224 | orchestrator | 2025-05-30 01:11:56 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:11:59.778861 | orchestrator | 2025-05-30 01:11:59 | INFO  | Task 
fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:11:59.778997 | orchestrator | 2025-05-30 01:11:59 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:11:59.779014 | orchestrator | 2025-05-30 01:11:59 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:12:02.844321 | orchestrator | 2025-05-30 01:12:02 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:12:02.845464 | orchestrator | 2025-05-30 01:12:02 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:12:02.845495 | orchestrator | 2025-05-30 01:12:02 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:12:05.896866 | orchestrator | 2025-05-30 01:12:05 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:12:05.898989 | orchestrator | 2025-05-30 01:12:05 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:12:05.899037 | orchestrator | 2025-05-30 01:12:05 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:12:08.948545 | orchestrator | 2025-05-30 01:12:08 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:12:08.951068 | orchestrator | 2025-05-30 01:12:08 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:12:08.951168 | orchestrator | 2025-05-30 01:12:08 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:12:12.011374 | orchestrator | 2025-05-30 01:12:12 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:12:12.012182 | orchestrator | 2025-05-30 01:12:12 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:12:12.012216 | orchestrator | 2025-05-30 01:12:12 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:12:15.050648 | orchestrator | 2025-05-30 01:12:15 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:12:15.051775 | orchestrator | 2025-05-30 01:12:15 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:12:15.051808 | orchestrator | 2025-05-30 01:12:15 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:12:18.095234 | orchestrator | 2025-05-30 01:12:18 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:12:18.096892 | orchestrator | 2025-05-30 01:12:18 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:12:18.096957 | orchestrator | 2025-05-30 01:12:18 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:12:21.152341 | orchestrator | 2025-05-30 01:12:21 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:12:21.154088 | orchestrator | 2025-05-30 01:12:21 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:12:21.154117 | orchestrator | 2025-05-30 01:12:21 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:12:24.197065 | orchestrator | 2025-05-30 01:12:24 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:12:24.197758 | orchestrator | 2025-05-30 01:12:24 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:12:24.197789 | orchestrator | 2025-05-30 01:12:24 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:12:27.244097 | orchestrator | 2025-05-30 01:12:27 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:12:27.245779 | orchestrator 
| 2025-05-30 01:12:27 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:12:27.245811 | orchestrator | 2025-05-30 01:12:27 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:12:30.297975 | orchestrator | 2025-05-30 01:12:30 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:12:30.299847 | orchestrator | 2025-05-30 01:12:30 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:12:30.299881 | orchestrator | 2025-05-30 01:12:30 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:12:33.337967 | orchestrator | 2025-05-30 01:12:33 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:12:33.338135 | orchestrator | 2025-05-30 01:12:33 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:12:33.338154 | orchestrator | 2025-05-30 01:12:33 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:12:36.374667 | orchestrator | 2025-05-30 01:12:36 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:12:36.375681 | orchestrator | 2025-05-30 01:12:36 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:12:36.375722 | orchestrator | 2025-05-30 01:12:36 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:12:39.416401 | orchestrator | 2025-05-30 01:12:39 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:12:39.416507 | orchestrator | 2025-05-30 01:12:39 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:12:39.416523 | orchestrator | 2025-05-30 01:12:39 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:12:42.454324 | orchestrator | 2025-05-30 01:12:42 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:12:42.455574 | orchestrator | 2025-05-30 01:12:42 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:12:42.455642 | orchestrator | 2025-05-30 01:12:42 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:12:45.499396 | orchestrator | 2025-05-30 01:12:45 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:12:45.500354 | orchestrator | 2025-05-30 01:12:45 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:12:45.500465 | orchestrator | 2025-05-30 01:12:45 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:12:48.548568 | orchestrator | 2025-05-30 01:12:48 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:12:48.550178 | orchestrator | 2025-05-30 01:12:48 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:12:48.550226 | orchestrator | 2025-05-30 01:12:48 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:12:51.592328 | orchestrator | 2025-05-30 01:12:51 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:12:51.592429 | orchestrator | 2025-05-30 01:12:51 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:12:51.592445 | orchestrator | 2025-05-30 01:12:51 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:12:54.632968 | orchestrator | 2025-05-30 01:12:54 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:12:54.634654 | orchestrator | 2025-05-30 01:12:54 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 
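Editor's note: the remainder of this section is the deployment driver polling its background tasks every few seconds until they leave the STARTED state (054ee7b5-… reaches SUCCESS further down, the other two keep running). A minimal sketch of that wait loop is given below; the task handles with a .state attribute are hypothetical, used only to illustrate the pattern behind the repeated log lines.

```python
# Sketch of the polling loop behind the
# "Task <id> is in state STARTED" / "Wait 1 second(s) until the next check" lines:
# check every pending task, drop it once it reports a terminal state,
# otherwise sleep and check again. Task handles are hypothetical.
import logging
import time

log = logging.getLogger("osism")

def wait_for_tasks(tasks, interval=1):
    pending = dict(tasks)                  # {task_id: handle exposing .state}
    while pending:
        for task_id, handle in list(pending.items()):
            state = handle.state           # e.g. STARTED, SUCCESS
            log.info("Task %s is in state %s", task_id, state)
            if state in ("SUCCESS", "FAILURE"):
                del pending[task_id]
        if pending:
            log.info("Wait %d second(s) until the next check", interval)
            time.sleep(interval)
```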
2025-05-30 01:12:54.634734 | orchestrator | 2025-05-30 01:12:54 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:12:57.672583 | orchestrator | 2025-05-30 01:12:57 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:12:57.673002 | orchestrator | 2025-05-30 01:12:57 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:12:57.673040 | orchestrator | 2025-05-30 01:12:57 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:13:00.731447 | orchestrator | 2025-05-30 01:13:00 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:13:00.734182 | orchestrator | 2025-05-30 01:13:00 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:13:00.734973 | orchestrator | 2025-05-30 01:13:00 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:13:03.795176 | orchestrator | 2025-05-30 01:13:03 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:13:03.795708 | orchestrator | 2025-05-30 01:13:03 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:13:03.795753 | orchestrator | 2025-05-30 01:13:03 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:13:06.844953 | orchestrator | 2025-05-30 01:13:06 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:13:06.846553 | orchestrator | 2025-05-30 01:13:06 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:13:06.846586 | orchestrator | 2025-05-30 01:13:06 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:13:09.916867 | orchestrator | 2025-05-30 01:13:09 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:13:09.917038 | orchestrator | 2025-05-30 01:13:09 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:13:09.917054 | orchestrator | 2025-05-30 01:13:09 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:13:12.967251 | orchestrator | 2025-05-30 01:13:12 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:13:12.969262 | orchestrator | 2025-05-30 01:13:12 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:13:12.969433 | orchestrator | 2025-05-30 01:13:12 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:13:16.017968 | orchestrator | 2025-05-30 01:13:16 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:13:16.018458 | orchestrator | 2025-05-30 01:13:16 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:13:16.018493 | orchestrator | 2025-05-30 01:13:16 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:13:19.053842 | orchestrator | 2025-05-30 01:13:19 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:13:19.054520 | orchestrator | 2025-05-30 01:13:19 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:13:19.054569 | orchestrator | 2025-05-30 01:13:19 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:13:22.105749 | orchestrator | 2025-05-30 01:13:22 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:13:22.106527 | orchestrator | 2025-05-30 01:13:22 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:13:22.106616 | orchestrator | 2025-05-30 01:13:22 | INFO  | Wait 1 second(s) until 
the next check 2025-05-30 01:13:25.153693 | orchestrator | 2025-05-30 01:13:25 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:13:25.157236 | orchestrator | 2025-05-30 01:13:25 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:13:25.157295 | orchestrator | 2025-05-30 01:13:25 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:13:28.211538 | orchestrator | 2025-05-30 01:13:28 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:13:28.213149 | orchestrator | 2025-05-30 01:13:28 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:13:28.213181 | orchestrator | 2025-05-30 01:13:28 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:13:31.268632 | orchestrator | 2025-05-30 01:13:31 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:13:31.270759 | orchestrator | 2025-05-30 01:13:31 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:13:31.270802 | orchestrator | 2025-05-30 01:13:31 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:13:34.325757 | orchestrator | 2025-05-30 01:13:34 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:13:34.327141 | orchestrator | 2025-05-30 01:13:34 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:13:34.327203 | orchestrator | 2025-05-30 01:13:34 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:13:37.382500 | orchestrator | 2025-05-30 01:13:37 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:13:37.384552 | orchestrator | 2025-05-30 01:13:37 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:13:37.384587 | orchestrator | 2025-05-30 01:13:37 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:13:40.434563 | orchestrator | 2025-05-30 01:13:40 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:13:40.435751 | orchestrator | 2025-05-30 01:13:40 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:13:40.435786 | orchestrator | 2025-05-30 01:13:40 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:13:43.490227 | orchestrator | 2025-05-30 01:13:43 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:13:43.492075 | orchestrator | 2025-05-30 01:13:43 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:13:43.492111 | orchestrator | 2025-05-30 01:13:43 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:13:46.554085 | orchestrator | 2025-05-30 01:13:46 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:13:46.554248 | orchestrator | 2025-05-30 01:13:46 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:13:46.554266 | orchestrator | 2025-05-30 01:13:46 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:13:49.616854 | orchestrator | 2025-05-30 01:13:49 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:13:49.617031 | orchestrator | 2025-05-30 01:13:49 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:13:49.617048 | orchestrator | 2025-05-30 01:13:49 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:13:52.659822 | orchestrator | 2025-05-30 01:13:52 | INFO  | Task 
fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:13:52.660204 | orchestrator | 2025-05-30 01:13:52 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:13:52.660244 | orchestrator | 2025-05-30 01:13:52 | INFO  | Wait 1 second(s) until the next check
[... repeated polling output from 01:13:55 to 01:15:36 trimmed: tasks fb4c5da4-6736-4528-a700-d20c81fc8612 and ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 remain in state STARTED throughout; task 054ee7b5-20f8-4bb3-9d21-2d00a0582583 first appears in state STARTED at 01:14:26 and is reported in state SUCCESS at 01:14:38 ...]
2025-05-30 01:15:39.521541 | orchestrator | 2025-05-30 01:15:39 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:15:39.523158 |
orchestrator | 2025-05-30 01:15:39 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state STARTED 2025-05-30 01:15:39.523199 | orchestrator | 2025-05-30 01:15:39 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:15:42.570771 | orchestrator | 2025-05-30 01:15:42 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:15:42.573908 | orchestrator | 2025-05-30 01:15:42 | INFO  | Task ea14c0d2-b0b0-49bd-9a63-b9e8bf736473 is in state SUCCESS 2025-05-30 01:15:42.575314 | orchestrator | 2025-05-30 01:15:42.575355 | orchestrator | 2025-05-30 01:15:42.575368 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-30 01:15:42.575379 | orchestrator | 2025-05-30 01:15:42.575390 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-30 01:15:42.575401 | orchestrator | Friday 30 May 2025 01:08:51 +0000 (0:00:00.242) 0:00:00.242 ************ 2025-05-30 01:15:42.575412 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:15:42.575424 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:15:42.575435 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:15:42.575445 | orchestrator | 2025-05-30 01:15:42.575456 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-30 01:15:42.575467 | orchestrator | Friday 30 May 2025 01:08:51 +0000 (0:00:00.503) 0:00:00.745 ************ 2025-05-30 01:15:42.575478 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-05-30 01:15:42.575489 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-05-30 01:15:42.575499 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-05-30 01:15:42.575510 | orchestrator | 2025-05-30 01:15:42.575521 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-05-30 01:15:42.575531 | orchestrator | 2025-05-30 01:15:42.575542 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-05-30 01:15:42.575553 | orchestrator | Friday 30 May 2025 01:08:52 +0000 (0:00:00.888) 0:00:01.634 ************ 2025-05-30 01:15:42.575563 | orchestrator | 2025-05-30 01:15:42.575574 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2025-05-30 01:15:42.575636 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:15:42.575647 | orchestrator | ok: [testbed-node-1] 2025-05-30 01:15:42.575658 | orchestrator | ok: [testbed-node-2] 2025-05-30 01:15:42.575669 | orchestrator | 2025-05-30 01:15:42.575680 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-30 01:15:42.575691 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 01:15:42.575757 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 01:15:42.575769 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-30 01:15:42.575780 | orchestrator | 2025-05-30 01:15:42.575791 | orchestrator | 2025-05-30 01:15:42.575802 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-30 01:15:42.575813 | orchestrator | Friday 30 May 2025 01:11:27 +0000 (0:02:34.888) 0:02:36.523 ************ 2025-05-30 01:15:42.575885 | orchestrator | 
=============================================================================== 2025-05-30 01:15:42.575899 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 154.89s 2025-05-30 01:15:42.576031 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.89s 2025-05-30 01:15:42.576047 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.50s 2025-05-30 01:15:42.576059 | orchestrator | 2025-05-30 01:15:42.576073 | orchestrator | None 2025-05-30 01:15:42.576085 | orchestrator | 2025-05-30 01:15:42.576098 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-30 01:15:42.576110 | orchestrator | 2025-05-30 01:15:42.576136 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-05-30 01:15:42.576148 | orchestrator | Friday 30 May 2025 01:07:33 +0000 (0:00:00.669) 0:00:00.669 ************ 2025-05-30 01:15:42.576176 | orchestrator | changed: [testbed-manager] 2025-05-30 01:15:42.576189 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:15:42.576201 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:15:42.576213 | orchestrator | changed: [testbed-node-2] 2025-05-30 01:15:42.576225 | orchestrator | changed: [testbed-node-3] 2025-05-30 01:15:42.576256 | orchestrator | changed: [testbed-node-4] 2025-05-30 01:15:42.576267 | orchestrator | changed: [testbed-node-5] 2025-05-30 01:15:42.576278 | orchestrator | 2025-05-30 01:15:42.576289 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-30 01:15:42.576300 | orchestrator | Friday 30 May 2025 01:07:35 +0000 (0:00:01.974) 0:00:02.644 ************ 2025-05-30 01:15:42.576310 | orchestrator | changed: [testbed-manager] 2025-05-30 01:15:42.576321 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:15:42.576331 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:15:42.576356 | orchestrator | changed: [testbed-node-2] 2025-05-30 01:15:42.576367 | orchestrator | changed: [testbed-node-3] 2025-05-30 01:15:42.576377 | orchestrator | changed: [testbed-node-4] 2025-05-30 01:15:42.576388 | orchestrator | changed: [testbed-node-5] 2025-05-30 01:15:42.576399 | orchestrator | 2025-05-30 01:15:42.576410 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-30 01:15:42.576420 | orchestrator | Friday 30 May 2025 01:07:37 +0000 (0:00:02.076) 0:00:04.720 ************ 2025-05-30 01:15:42.576432 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-05-30 01:15:42.576443 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-05-30 01:15:42.576454 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-05-30 01:15:42.576464 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-05-30 01:15:42.576475 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-05-30 01:15:42.576486 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-05-30 01:15:42.576496 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-05-30 01:15:42.576507 | orchestrator | 2025-05-30 01:15:42.576518 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-05-30 01:15:42.576528 | orchestrator | 2025-05-30 01:15:42.576539 | orchestrator | TASK [Bootstrap deploy] 
******************************************************** 2025-05-30 01:15:42.576550 | orchestrator | Friday 30 May 2025 01:07:39 +0000 (0:00:02.673) 0:00:07.394 ************ 2025-05-30 01:15:42.576578 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 01:15:42.576589 | orchestrator | 2025-05-30 01:15:42.576600 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-05-30 01:15:42.576624 | orchestrator | Friday 30 May 2025 01:07:41 +0000 (0:00:01.520) 0:00:08.914 ************ 2025-05-30 01:15:42.576636 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-05-30 01:15:42.576647 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-05-30 01:15:42.576658 | orchestrator | 2025-05-30 01:15:42.576668 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-05-30 01:15:42.576679 | orchestrator | Friday 30 May 2025 01:07:45 +0000 (0:00:04.271) 0:00:13.186 ************ 2025-05-30 01:15:42.576690 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-30 01:15:42.576701 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-30 01:15:42.576712 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:15:42.576722 | orchestrator | 2025-05-30 01:15:42.576733 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-05-30 01:15:42.576744 | orchestrator | Friday 30 May 2025 01:07:50 +0000 (0:00:04.419) 0:00:17.605 ************ 2025-05-30 01:15:42.576755 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:15:42.576765 | orchestrator | 2025-05-30 01:15:42.576776 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-05-30 01:15:42.576787 | orchestrator | Friday 30 May 2025 01:07:51 +0000 (0:00:01.025) 0:00:18.631 ************ 2025-05-30 01:15:42.576798 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:15:42.576809 | orchestrator | 2025-05-30 01:15:42.576819 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-05-30 01:15:42.576860 | orchestrator | Friday 30 May 2025 01:07:53 +0000 (0:00:02.557) 0:00:21.189 ************ 2025-05-30 01:15:42.576933 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:15:42.576957 | orchestrator | 2025-05-30 01:15:42.576968 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-30 01:15:42.577103 | orchestrator | Friday 30 May 2025 01:07:57 +0000 (0:00:03.663) 0:00:24.853 ************ 2025-05-30 01:15:42.577123 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:15:42.577141 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.577158 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.577177 | orchestrator | 2025-05-30 01:15:42.577197 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-05-30 01:15:42.577217 | orchestrator | Friday 30 May 2025 01:07:57 +0000 (0:00:00.587) 0:00:25.441 ************ 2025-05-30 01:15:42.577234 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:15:42.577246 | orchestrator | 2025-05-30 01:15:42.577257 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-05-30 01:15:42.577267 | orchestrator | Friday 30 May 2025 01:08:27 +0000 (0:00:30.029) 0:00:55.470 ************ 2025-05-30 01:15:42.577278 | orchestrator | changed: [testbed-node-0] 
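Note: the "Running Nova API bootstrap container" and "Create cell0 mappings" steps above correspond, roughly, to nova-manage invocations inside the bootstrap container. A minimal sketch of the equivalent calls follows, assuming the standard nova-manage CLI; the database connection string is an illustrative placeholder, not a value taken from this deployment.

# Rough sketch of the nova-manage calls behind the API bootstrap and cell0
# mapping tasks above. The connection string is a placeholder; the real values
# are rendered by kolla-ansible and are not visible in this log.
import subprocess

def nova_manage(*args: str) -> None:
    # In the actual deployment these commands run inside the nova_api
    # bootstrap container, not directly on the host.
    subprocess.run(["nova-manage", *args], check=True)

nova_manage("api_db", "sync")                      # migrate the nova_api schema
nova_manage("cell_v2", "map_cell0",                # "Create cell0 mappings"
            "--database_connection",
            "mysql+pymysql://nova:password@dbhost/nova_cell0")
nova_manage("cell_v2", "list_cells", "--verbose")  # "Get a list of existing cells"

The later "nova-cell : Create cell" task is the same idea with nova-manage cell_v2 create_cell, carrying the cell's transport URL and database connection.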
2025-05-30 01:15:42.577289 | orchestrator | 2025-05-30 01:15:42.577299 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-30 01:15:42.577310 | orchestrator | Friday 30 May 2025 01:08:39 +0000 (0:00:11.934) 0:01:07.404 ************ 2025-05-30 01:15:42.577321 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:15:42.577332 | orchestrator | 2025-05-30 01:15:42.577342 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-30 01:15:42.577353 | orchestrator | Friday 30 May 2025 01:08:50 +0000 (0:00:10.275) 0:01:17.679 ************ 2025-05-30 01:15:42.577364 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:15:42.577374 | orchestrator | 2025-05-30 01:15:42.577385 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-05-30 01:15:42.577396 | orchestrator | Friday 30 May 2025 01:08:51 +0000 (0:00:01.635) 0:01:19.315 ************ 2025-05-30 01:15:42.577406 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:15:42.577417 | orchestrator | 2025-05-30 01:15:42.577427 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-30 01:15:42.577438 | orchestrator | Friday 30 May 2025 01:08:52 +0000 (0:00:00.805) 0:01:20.121 ************ 2025-05-30 01:15:42.577449 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 01:15:42.577460 | orchestrator | 2025-05-30 01:15:42.577471 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-05-30 01:15:42.577481 | orchestrator | Friday 30 May 2025 01:08:53 +0000 (0:00:00.764) 0:01:20.885 ************ 2025-05-30 01:15:42.577492 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:15:42.577502 | orchestrator | 2025-05-30 01:15:42.577513 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-05-30 01:15:42.577532 | orchestrator | Friday 30 May 2025 01:09:07 +0000 (0:00:14.584) 0:01:35.470 ************ 2025-05-30 01:15:42.577543 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:15:42.577554 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.577565 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.577576 | orchestrator | 2025-05-30 01:15:42.577586 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-05-30 01:15:42.577597 | orchestrator | 2025-05-30 01:15:42.577608 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-05-30 01:15:42.577619 | orchestrator | Friday 30 May 2025 01:09:08 +0000 (0:00:00.363) 0:01:35.833 ************ 2025-05-30 01:15:42.577629 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 01:15:42.577640 | orchestrator | 2025-05-30 01:15:42.577651 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-05-30 01:15:42.577662 | orchestrator | Friday 30 May 2025 01:09:08 +0000 (0:00:00.720) 0:01:36.554 ************ 2025-05-30 01:15:42.577672 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.577683 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.577704 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:15:42.577734 | orchestrator | 2025-05-30 01:15:42.577745 | orchestrator | TASK [nova-cell : Creating Nova cell database user 
and setting permissions] **** 2025-05-30 01:15:42.577768 | orchestrator | Friday 30 May 2025 01:09:11 +0000 (0:00:02.147) 0:01:38.702 ************ 2025-05-30 01:15:42.577780 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.577791 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.577813 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:15:42.577852 | orchestrator | 2025-05-30 01:15:42.577864 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-05-30 01:15:42.577914 | orchestrator | Friday 30 May 2025 01:09:13 +0000 (0:00:02.146) 0:01:40.848 ************ 2025-05-30 01:15:42.577926 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:15:42.577937 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.577947 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.577958 | orchestrator | 2025-05-30 01:15:42.578068 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-05-30 01:15:42.578083 | orchestrator | Friday 30 May 2025 01:09:13 +0000 (0:00:00.403) 0:01:41.252 ************ 2025-05-30 01:15:42.578094 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-30 01:15:42.578106 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.578117 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-30 01:15:42.578128 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.578138 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-05-30 01:15:42.578149 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-05-30 01:15:42.578160 | orchestrator | 2025-05-30 01:15:42.578171 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-05-30 01:15:42.578182 | orchestrator | Friday 30 May 2025 01:09:21 +0000 (0:00:08.197) 0:01:49.450 ************ 2025-05-30 01:15:42.578193 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:15:42.578203 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.578214 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.578225 | orchestrator | 2025-05-30 01:15:42.578235 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-05-30 01:15:42.578246 | orchestrator | Friday 30 May 2025 01:09:22 +0000 (0:00:00.488) 0:01:49.939 ************ 2025-05-30 01:15:42.578257 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-30 01:15:42.578268 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:15:42.578278 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-30 01:15:42.578289 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.578300 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-30 01:15:42.578311 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.578321 | orchestrator | 2025-05-30 01:15:42.578332 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-05-30 01:15:42.578343 | orchestrator | Friday 30 May 2025 01:09:23 +0000 (0:00:00.932) 0:01:50.871 ************ 2025-05-30 01:15:42.578354 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.578364 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.578375 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:15:42.578386 | orchestrator | 2025-05-30 01:15:42.578396 | orchestrator | TASK [nova-cell : Copying over config.json files for 
nova-cell-bootstrap] ****** 2025-05-30 01:15:42.578407 | orchestrator | Friday 30 May 2025 01:09:23 +0000 (0:00:00.439) 0:01:51.311 ************ 2025-05-30 01:15:42.578418 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.578428 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.578439 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:15:42.578450 | orchestrator | 2025-05-30 01:15:42.578461 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-05-30 01:15:42.578471 | orchestrator | Friday 30 May 2025 01:09:24 +0000 (0:00:00.963) 0:01:52.274 ************ 2025-05-30 01:15:42.578482 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.578493 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.578512 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:15:42.578523 | orchestrator | 2025-05-30 01:15:42.578533 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-05-30 01:15:42.578544 | orchestrator | Friday 30 May 2025 01:09:26 +0000 (0:00:02.173) 0:01:54.448 ************ 2025-05-30 01:15:42.578555 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.578566 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.578577 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:15:42.578587 | orchestrator | 2025-05-30 01:15:42.578598 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-30 01:15:42.578609 | orchestrator | Friday 30 May 2025 01:09:45 +0000 (0:00:18.814) 0:02:13.262 ************ 2025-05-30 01:15:42.578620 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.578631 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.578641 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:15:42.578652 | orchestrator | 2025-05-30 01:15:42.578662 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-30 01:15:42.578673 | orchestrator | Friday 30 May 2025 01:09:56 +0000 (0:00:10.317) 0:02:23.580 ************ 2025-05-30 01:15:42.578684 | orchestrator | ok: [testbed-node-0] 2025-05-30 01:15:42.578701 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.578712 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.578723 | orchestrator | 2025-05-30 01:15:42.578733 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-05-30 01:15:42.578744 | orchestrator | Friday 30 May 2025 01:09:57 +0000 (0:00:01.165) 0:02:24.745 ************ 2025-05-30 01:15:42.578755 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.578765 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.578776 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:15:42.578787 | orchestrator | 2025-05-30 01:15:42.578798 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-05-30 01:15:42.578808 | orchestrator | Friday 30 May 2025 01:10:07 +0000 (0:00:10.244) 0:02:34.990 ************ 2025-05-30 01:15:42.578819 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:15:42.578854 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.578865 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.578876 | orchestrator | 2025-05-30 01:15:42.578887 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-05-30 01:15:42.578898 | orchestrator | Friday 30 
May 2025 01:10:08 +0000 (0:00:01.510) 0:02:36.500 ************ 2025-05-30 01:15:42.578909 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:15:42.578919 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.578930 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.578941 | orchestrator | 2025-05-30 01:15:42.578951 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-05-30 01:15:42.578962 | orchestrator | 2025-05-30 01:15:42.578973 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-30 01:15:42.578984 | orchestrator | Friday 30 May 2025 01:10:09 +0000 (0:00:00.452) 0:02:36.953 ************ 2025-05-30 01:15:42.579002 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 01:15:42.579053 | orchestrator | 2025-05-30 01:15:42.579064 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-05-30 01:15:42.579074 | orchestrator | Friday 30 May 2025 01:10:09 +0000 (0:00:00.601) 0:02:37.554 ************ 2025-05-30 01:15:42.579085 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-05-30 01:15:42.579096 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-05-30 01:15:42.579107 | orchestrator | 2025-05-30 01:15:42.579118 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-05-30 01:15:42.579128 | orchestrator | Friday 30 May 2025 01:10:13 +0000 (0:00:03.294) 0:02:40.849 ************ 2025-05-30 01:15:42.579139 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-05-30 01:15:42.579160 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-05-30 01:15:42.579171 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-05-30 01:15:42.579182 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-05-30 01:15:42.579192 | orchestrator | 2025-05-30 01:15:42.579203 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-05-30 01:15:42.579214 | orchestrator | Friday 30 May 2025 01:10:19 +0000 (0:00:06.244) 0:02:47.093 ************ 2025-05-30 01:15:42.579225 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-30 01:15:42.579236 | orchestrator | 2025-05-30 01:15:42.579246 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-05-30 01:15:42.579257 | orchestrator | Friday 30 May 2025 01:10:22 +0000 (0:00:03.075) 0:02:50.168 ************ 2025-05-30 01:15:42.579268 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-30 01:15:42.579279 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-05-30 01:15:42.579290 | orchestrator | 2025-05-30 01:15:42.579300 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-05-30 01:15:42.579311 | orchestrator | Friday 30 May 2025 01:10:26 +0000 (0:00:03.849) 0:02:54.018 ************ 2025-05-30 01:15:42.579322 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-30 01:15:42.579332 | orchestrator | 2025-05-30 01:15:42.579343 | 
orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-05-30 01:15:42.579354 | orchestrator | Friday 30 May 2025 01:10:29 +0000 (0:00:03.352) 0:02:57.371 ************ 2025-05-30 01:15:42.579365 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-05-30 01:15:42.579375 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-05-30 01:15:42.579386 | orchestrator | 2025-05-30 01:15:42.579397 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-05-30 01:15:42.579407 | orchestrator | Friday 30 May 2025 01:10:38 +0000 (0:00:08.268) 0:03:05.640 ************ 2025-05-30 01:15:42.579430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-30 01:15:42.579469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-30 01:15:42.579490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.579503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.579523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-30 01:15:42.579536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.579555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  
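For readability, this is the nova-api loop item that the config-directory and copy tasks iterate over, unflattened into a Python literal. All values are copied from the testbed-node-0 entry in the log (the other nodes differ only in the healthcheck IP); the 'mode', 'tls_backend', and empty volume entries are omitted here for brevity.

# The nova-api service definition as logged above (testbed-node-0), laid out
# readably. This is a trimmed copy of the logged loop item, not a complete
# kolla-ansible variable definition.
nova_api_item = {
    "container_name": "nova_api",
    "group": "nova-api",
    "image": "registry.osism.tech/kolla/release/nova-api:29.2.1.20241206",
    "enabled": True,
    "privileged": True,
    "volumes": [
        "/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "/lib/modules:/lib/modules:ro",
        "kolla_logs:/var/log/kolla/",
    ],
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "timeout": "30",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8774 "],
    },
    "haproxy": {
        # internal and external API front ends on 8774; metadata on 8775,
        # with the external metadata endpoint disabled ('enabled': 'no')
        "nova_api": {"enabled": True, "external": False, "port": "8774", "listen_port": "8774"},
        "nova_api_external": {"enabled": True, "external": True, "port": "8774", "listen_port": "8774",
                              "external_fqdn": "api.testbed.osism.xyz"},
        "nova_metadata": {"enabled": True, "external": False, "port": "8775", "listen_port": "8775"},
        "nova_metadata_external": {"enabled": "no", "external": True, "port": "8775", "listen_port": "8775",
                                   "external_fqdn": "api.testbed.osism.xyz"},
    },
}

The nova-scheduler item is similar but uses "healthcheck_port nova-scheduler 5672" as its healthcheck test instead of the HTTP probe.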
2025-05-30 01:15:42.579574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.579585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.579597 | orchestrator | 2025-05-30 01:15:42.579608 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-05-30 01:15:42.579619 | orchestrator | Friday 30 May 2025 01:10:39 +0000 (0:00:01.434) 0:03:07.075 ************ 2025-05-30 01:15:42.579630 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:15:42.579641 | orchestrator | 2025-05-30 01:15:42.579651 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-05-30 01:15:42.579662 | orchestrator | Friday 30 May 2025 01:10:39 +0000 (0:00:00.298) 0:03:07.373 ************ 2025-05-30 01:15:42.579781 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:15:42.579794 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.579805 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.579816 | orchestrator | 2025-05-30 01:15:42.579862 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-05-30 01:15:42.579875 | orchestrator | Friday 30 May 2025 01:10:40 +0000 (0:00:00.297) 0:03:07.670 ************ 2025-05-30 01:15:42.579886 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-30 01:15:42.579897 | orchestrator | 2025-05-30 01:15:42.579908 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-05-30 01:15:42.579918 | orchestrator | Friday 30 May 2025 01:10:40 +0000 (0:00:00.559) 0:03:08.230 ************ 2025-05-30 01:15:42.579929 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:15:42.579939 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.579950 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.579961 | orchestrator | 2025-05-30 01:15:42.579972 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-30 01:15:42.579982 | orchestrator | Friday 30 May 2025 01:10:40 +0000 (0:00:00.281) 0:03:08.512 ************ 2025-05-30 01:15:42.579993 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 01:15:42.580004 | orchestrator | 2025-05-30 01:15:42.580014 | orchestrator | TASK [service-cert-copy : nova | 
Copying over extra CA certificates] *********** 2025-05-30 01:15:42.580025 | orchestrator | Friday 30 May 2025 01:10:41 +0000 (0:00:00.825) 0:03:09.337 ************ 2025-05-30 01:15:42.580044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-30 01:15:42.580081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-30 01:15:42.580095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-30 01:15:42.580113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.580132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.580149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.580161 | orchestrator | 2025-05-30 01:15:42.580172 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-05-30 01:15:42.580183 | orchestrator | Friday 30 May 2025 01:10:44 +0000 (0:00:02.671) 0:03:12.008 ************ 2025-05-30 01:15:42.580194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-30 01:15:42.580207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.580218 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:15:42.580235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-30 01:15:42.580253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.580265 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.580283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-30 01:15:42.580295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.580307 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.580317 | orchestrator | 2025-05-30 01:15:42.580328 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-30 01:15:42.580339 | orchestrator | Friday 30 May 2025 01:10:45 +0000 (0:00:00.633) 0:03:12.641 ************ 2025-05-30 01:15:42.580355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-30 01:15:42.580375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.580386 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:15:42.580409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-30 01:15:42.580422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.580433 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.580444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-30 01:15:42.580470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.580481 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.580492 | orchestrator | 2025-05-30 01:15:42.580503 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-05-30 01:15:42.580514 | orchestrator | Friday 30 May 2025 01:10:46 +0000 (0:00:01.188) 0:03:13.830 ************ 2025-05-30 01:15:42.580535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-30 01:15:42.580548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-30 01:15:42.580572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 
'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-30 01:15:42.580584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.580656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.580670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.580682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.580693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.580712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.580724 | orchestrator | 2025-05-30 01:15:42.580735 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-05-30 01:15:42.580751 | orchestrator | Friday 30 May 2025 01:10:49 +0000 (0:00:02.865) 0:03:16.695 ************ 2025-05-30 01:15:42.580770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-30 01:15:42.580783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-30 01:15:42.580796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-30 01:15:42.580819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.580861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.580892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.580912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.580932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.580955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.580966 | orchestrator | 2025-05-30 01:15:42.580977 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-05-30 01:15:42.580988 | orchestrator | Friday 30 May 2025 01:10:55 +0000 (0:00:06.462) 0:03:23.157 ************ 2025-05-30 01:15:42.581005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-30 01:15:42.581024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.581036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.581047 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:15:42.581059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-30 01:15:42.581077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 
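A note on the skipped policy-file items in this task: kolla-ansible only copies a Nova policy file into the nova-api and nova-scheduler containers when the operator has placed one under the node custom config directory (typically /etc/kolla/config/nova/policy.yaml), so on this testbed every host/item combination is skipped. Purely as an illustration of what such an override would look like if it were supplied (the rule below is a placeholder, not part of this deployment):

# Hypothetical /etc/kolla/config/nova/policy.yaml -- illustration only.
# If present, the "Copying over existing policy file" task would copy it
# into the nova-* containers instead of skipping.
"os_compute_api:os-flavor-manage:create": "rule:admin_api"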
01:15:42.581093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.581105 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.581124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-30 01:15:42.581137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.581155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.581166 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.581177 | orchestrator | 2025-05-30 01:15:42.581187 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-05-30 01:15:42.581198 | 
orchestrator | Friday 30 May 2025 01:10:56 +0000 (0:00:00.827) 0:03:23.985 ************ 2025-05-30 01:15:42.581209 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:15:42.581220 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:15:42.581230 | orchestrator | changed: [testbed-node-2] 2025-05-30 01:15:42.581241 | orchestrator | 2025-05-30 01:15:42.581252 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-05-30 01:15:42.581263 | orchestrator | Friday 30 May 2025 01:10:58 +0000 (0:00:01.725) 0:03:25.710 ************ 2025-05-30 01:15:42.581274 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:15:42.581284 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.581295 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.581305 | orchestrator | 2025-05-30 01:15:42.581316 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-05-30 01:15:42.581327 | orchestrator | Friday 30 May 2025 01:10:58 +0000 (0:00:00.521) 0:03:26.232 ************ 2025-05-30 01:15:42.581343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-30 01:15:42.581364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-30 01:15:42.581383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-30 01:15:42.581396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.581413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.581424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.581442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.581460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.581471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.581482 | orchestrator | 2025-05-30 01:15:42.581493 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-30 01:15:42.581504 | orchestrator | Friday 30 May 2025 01:11:00 +0000 (0:00:01.973) 0:03:28.205 ************ 2025-05-30 01:15:42.581515 | orchestrator | 2025-05-30 01:15:42.581526 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-30 01:15:42.581537 | orchestrator | Friday 30 May 2025 01:11:00 +0000 (0:00:00.320) 0:03:28.526 ************ 2025-05-30 01:15:42.581547 | orchestrator | 2025-05-30 01:15:42.581558 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-30 01:15:42.581568 | orchestrator | Friday 30 May 2025 01:11:01 +0000 (0:00:00.115) 0:03:28.641 ************ 2025-05-30 01:15:42.581579 | orchestrator | 2025-05-30 01:15:42.581590 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-05-30 01:15:42.581600 | orchestrator | Friday 30 May 2025 01:11:01 +0000 (0:00:00.314) 0:03:28.956 ************ 2025-05-30 01:15:42.581618 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:15:42.581644 | orchestrator | changed: [testbed-node-2] 2025-05-30 01:15:42.581664 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:15:42.581680 | orchestrator | 2025-05-30 01:15:42.581697 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-05-30 01:15:42.581715 | orchestrator | Friday 30 May 2025 01:11:18 +0000 (0:00:16.771) 0:03:45.727 ************ 2025-05-30 01:15:42.581730 | orchestrator | changed: [testbed-node-0] 2025-05-30 01:15:42.581745 | orchestrator | changed: [testbed-node-1] 2025-05-30 01:15:42.581761 | orchestrator | changed: 
[testbed-node-2]
2025-05-30 01:15:42.581777 | orchestrator | 
2025-05-30 01:15:42.581792 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2025-05-30 01:15:42.581807 | orchestrator | 
2025-05-30 01:15:42.581885 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-05-30 01:15:42.581908 | orchestrator | Friday 30 May 2025 01:11:24 +0000 (0:00:05.952) 0:03:51.680 ************
2025-05-30 01:15:42.581938 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-30 01:15:42.581959 | orchestrator | 
2025-05-30 01:15:42.581970 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-05-30 01:15:42.581981 | orchestrator | Friday 30 May 2025 01:11:25 +0000 (0:00:01.476) 0:03:53.156 ************
2025-05-30 01:15:42.581992 | orchestrator | skipping: [testbed-node-3]
2025-05-30 01:15:42.582002 | orchestrator | skipping: [testbed-node-4]
2025-05-30 01:15:42.582013 | orchestrator | skipping: [testbed-node-5]
2025-05-30 01:15:42.582075 | orchestrator | skipping: [testbed-node-0]
2025-05-30 01:15:42.582087 | orchestrator | skipping: [testbed-node-1]
2025-05-30 01:15:42.582109 | orchestrator | skipping: [testbed-node-2]
2025-05-30 01:15:42.582119 | orchestrator | 
2025-05-30 01:15:42.582130 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2025-05-30 01:15:42.582141 | orchestrator | Friday 30 May 2025 01:11:26 +0000 (0:00:00.754) 0:03:53.911 ************
2025-05-30 01:15:42.582152 | orchestrator | skipping: [testbed-node-0]
2025-05-30 01:15:42.582162 | orchestrator | skipping: [testbed-node-1]
2025-05-30 01:15:42.582172 | orchestrator | skipping: [testbed-node-2]
2025-05-30 01:15:42.582182 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-30 01:15:42.582191 | orchestrator | 
2025-05-30 01:15:42.582201 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-05-30 01:15:42.582211 | orchestrator | Friday 30 May 2025 01:11:27 +0000 (0:00:01.112) 0:03:55.023 ************
2025-05-30 01:15:42.582221 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2025-05-30 01:15:42.582231 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2025-05-30 01:15:42.582240 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2025-05-30 01:15:42.582250 | orchestrator | 
2025-05-30 01:15:42.582269 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-05-30 01:15:42.582279 | orchestrator | Friday 30 May 2025 01:11:28 +0000 (0:00:00.891) 0:03:55.914 ************
2025-05-30 01:15:42.582289 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2025-05-30 01:15:42.582299 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2025-05-30 01:15:42.582308 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2025-05-30 01:15:42.582318 | orchestrator | 
2025-05-30 01:15:42.582327 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-05-30 01:15:42.582337 | orchestrator | Friday 30 May 2025 01:11:29 +0000 (0:00:01.376) 0:03:57.291 ************
2025-05-30 01:15:42.582346 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter) 
2025-05-30 01:15:42.582356 | orchestrator | skipping: [testbed-node-3]
2025-05-30 01:15:42.582366 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter) 
2025-05-30 01:15:42.582375 | orchestrator | skipping: [testbed-node-4]
2025-05-30 01:15:42.582385 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter) 
2025-05-30 01:15:42.582402 | orchestrator | skipping: [testbed-node-5]
2025-05-30 01:15:42.582418 | orchestrator | 
2025-05-30 01:15:42.582434 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2025-05-30 01:15:42.582449 | orchestrator | Friday 30 May 2025 01:11:30 +0000 (0:00:00.632) 0:03:57.924 ************
2025-05-30 01:15:42.582466 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables) 
2025-05-30 01:15:42.582480 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables) 
2025-05-30 01:15:42.582496 | orchestrator | skipping: [testbed-node-0]
2025-05-30 01:15:42.582513 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables) 
2025-05-30 01:15:42.582531 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables) 
2025-05-30 01:15:42.582548 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-30 01:15:42.582565 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-30 01:15:42.582575 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-30 01:15:42.582585 | orchestrator | skipping: [testbed-node-1]
2025-05-30 01:15:42.582594 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables) 
2025-05-30 01:15:42.582604 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables) 
2025-05-30 01:15:42.582613 | orchestrator | skipping: [testbed-node-2]
2025-05-30 01:15:42.582623 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-30 01:15:42.582632 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-30 01:15:42.582651 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-30 01:15:42.582660 | orchestrator | 
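The module-load and sysctl tasks above prepare the compute nodes (testbed-node-3/4/5) for iptables filtering of bridged instance traffic: br_netfilter is loaded and persisted via /etc/modules-load.d/, and the bridge-nf-call sysctls are switched on, while the controller nodes are skipped because they do not run nova-compute. A minimal standalone sketch of the same host preparation (assumed module names and paths, not the actual osism module-load or kolla nova-cell task files) would be:

# Sketch only: equivalent host preparation outside the osism/kolla roles.
- name: Load br_netfilter now
  community.general.modprobe:
    name: br_netfilter
    state: present

- name: Persist br_netfilter across reboots
  ansible.builtin.copy:
    dest: /etc/modules-load.d/br_netfilter.conf
    content: "br_netfilter\n"
    mode: "0644"

- name: Enable bridge-nf-call sysctls
  ansible.posix.sysctl:
    name: "{{ item }}"
    value: "1"
    state: present
  loop:
    - net.bridge.bridge-nf-call-iptables
    - net.bridge.bridge-nf-call-ip6tables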
2025-05-30 01:15:42.582670 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2025-05-30 01:15:42.582680 | orchestrator | Friday 30 May 2025 01:11:31 +0000 (0:00:01.249) 0:03:59.174 ************
2025-05-30 01:15:42.582690 | orchestrator | skipping: [testbed-node-0]
2025-05-30 01:15:42.582706 | orchestrator | skipping: [testbed-node-1]
2025-05-30 01:15:42.582723 | orchestrator | skipping: [testbed-node-2]
2025-05-30 01:15:42.582738 | orchestrator | changed: [testbed-node-3]
2025-05-30 01:15:42.582752 | orchestrator | changed: [testbed-node-4]
2025-05-30 01:15:42.582761 | orchestrator | changed: [testbed-node-5]
2025-05-30 01:15:42.582770 | orchestrator | 
2025-05-30 01:15:42.582780 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2025-05-30 01:15:42.582790 | orchestrator | Friday 30 May 2025 01:11:32 +0000 (0:00:01.179) 0:04:00.353 ************
2025-05-30 01:15:42.582799 | orchestrator | skipping: [testbed-node-0]
2025-05-30 01:15:42.582808 | orchestrator | skipping: [testbed-node-1]
2025-05-30 01:15:42.582818 | orchestrator | skipping: [testbed-node-2]
2025-05-30 01:15:42.582863 | orchestrator | changed: [testbed-node-3]
2025-05-30 01:15:42.582879 | 
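The udev rules and the masking of the host qemu-kvm service above likewise run only on the compute nodes: since QEMU and libvirt are provided by the nova_libvirt container, a distro-packaged virtualization service on the host must be prevented from starting. A rough standalone equivalent of the masking step (unit name taken from the task title; the real role may handle additional units) is:

# Sketch only: keep the host-level service from competing with the
# containerized libvirt/qemu started by kolla-ansible.
- name: Mask qemu-kvm on compute hosts
  ansible.builtin.systemd:
    name: qemu-kvm
    masked: true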
orchestrator | changed: [testbed-node-5] 2025-05-30 01:15:42.582895 | orchestrator | changed: [testbed-node-4] 2025-05-30 01:15:42.582911 | orchestrator | 2025-05-30 01:15:42.582927 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-05-30 01:15:42.582951 | orchestrator | Friday 30 May 2025 01:11:34 +0000 (0:00:01.827) 0:04:02.181 ************ 2025-05-30 01:15:42.582970 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-30 01:15:42.583013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-30 01:15:42.583034 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-30 01:15:42.583062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-30 01:15:42.583088 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-30 01:15:42.583105 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-30 01:15:42.583133 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.583151 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.583200 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:15:42.583234 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.583251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-30 01:15:42.583276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-30 01:15:42.583293 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-30 01:15:42.583322 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.583340 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.583369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-30 01:15:42.583387 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:15:42.583405 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.583429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.583446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 
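Among the definitions being processed above, nova-libvirt is the only one carrying non-empty 'dimensions': a memlock ulimit of 67108864 bytes (64 MiB) soft and hard, alongside the host PID/cgroup namespaces and the shared /run, /dev and cgroup mounts that containerized libvirt needs. kolla-ansible exposes such per-container resource settings through *_dimensions variables, so an operator who needed a larger lock limit would override it roughly as follows (variable name and value are an example to verify against the deployed release, not something this job changes):

# Sketch only: raise the nova_libvirt memlock ulimit via a dimensions
# override in the kolla-ansible configuration (e.g. globals.yml).
nova_libvirt_dimensions:
  ulimits:
    memlock:
      soft: 134217728   # 128 MiB instead of the 64 MiB seen above
      hard: 134217728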
01:15:42.583494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-30 01:15:42.583515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-30 01:15:42.583543 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-30 01:15:42.583561 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.583583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.583600 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:15:42.583615 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.583662 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.583693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-30 01:15:42.584003 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.584015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.584033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:15:42.584044 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.584064 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.584083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.584093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.584104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.584129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-30 01:15:42.584146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.584168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:15:42.584182 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.584203 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.584219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.584240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.584249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.584263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.584277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.584285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.584293 | orchestrator | 2025-05-30 01:15:42.584302 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-30 01:15:42.584310 | orchestrator | Friday 30 May 2025 01:11:37 +0000 (0:00:02.528) 0:04:04.709 ************ 2025-05-30 01:15:42.584318 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-30 01:15:42.584327 | orchestrator | 2025-05-30 01:15:42.584335 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-05-30 01:15:42.584343 | orchestrator | Friday 30 May 2025 01:11:38 +0000 (0:00:01.529) 0:04:06.239 ************ 2025-05-30 01:15:42.584352 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-30 01:15:42.584365 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-30 01:15:42.584379 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-30 01:15:42.584393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-30 01:15:42.584402 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-30 01:15:42.584410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-30 01:15:42.584425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-30 01:15:42.584434 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-30 01:15:42.584447 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-30 01:15:42.584465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.584479 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.584493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.584506 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.584524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.584554 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.584563 | orchestrator | 2025-05-30 01:15:42.584571 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-05-30 01:15:42.584584 | orchestrator | Friday 30 May 2025 01:11:42 +0000 (0:00:03.904) 0:04:10.144 ************ 2025-05-30 01:15:42.584598 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', 
''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-30 01:15:42.584612 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-30 01:15:42.584626 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.584640 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:15:42.584660 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-30 01:15:42.584684 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-30 01:15:42.584699 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 
'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.584713 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:15:42.584726 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-30 01:15:42.584740 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-30 01:15:42.584760 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.584784 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:15:42.584798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.584819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.584852 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:15:42.584867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.584881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.584895 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.584908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.584922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.584945 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.584959 | orchestrator | 2025-05-30 01:15:42.584977 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-30 01:15:42.584992 | orchestrator | Friday 30 May 2025 01:11:44 +0000 (0:00:01.785) 0:04:11.930 ************ 2025-05-30 01:15:42.585006 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-30 01:15:42.585028 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-30 01:15:42.585042 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.585056 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:15:42.585069 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', 
''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-30 01:15:42.585083 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-30 01:15:42.585137 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.585152 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:15:42.585174 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-30 01:15:42.585189 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-30 01:15:42.585203 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 
'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.585217 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:15:42.585230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.585258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.585272 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:15:42.585286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.585305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.585317 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.585329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 
'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.585343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.585356 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.585368 | orchestrator | 2025-05-30 01:15:42.585380 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-30 01:15:42.585391 | orchestrator | Friday 30 May 2025 01:11:47 +0000 (0:00:02.703) 0:04:14.633 ************ 2025-05-30 01:15:42.585402 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:15:42.585413 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.585425 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.585436 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-30 01:15:42.585455 | orchestrator | 2025-05-30 01:15:42.585467 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-05-30 01:15:42.585478 | orchestrator | Friday 30 May 2025 01:11:48 +0000 (0:00:01.155) 0:04:15.789 ************ 2025-05-30 01:15:42.585489 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-30 01:15:42.585501 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-30 01:15:42.585512 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-30 01:15:42.585524 | orchestrator | 2025-05-30 01:15:42.585537 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-05-30 01:15:42.585550 | orchestrator | Friday 30 May 2025 01:11:49 +0000 (0:00:00.849) 0:04:16.638 ************ 2025-05-30 01:15:42.585563 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-30 01:15:42.585577 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-30 01:15:42.585590 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-30 01:15:42.585604 | orchestrator | 2025-05-30 01:15:42.585617 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-05-30 01:15:42.585631 | orchestrator | Friday 30 May 2025 01:11:49 +0000 (0:00:00.809) 0:04:17.448 ************ 2025-05-30 01:15:42.585645 | orchestrator | ok: [testbed-node-3] 2025-05-30 01:15:42.585657 | orchestrator | ok: [testbed-node-4] 2025-05-30 01:15:42.585669 | orchestrator | ok: [testbed-node-5] 2025-05-30 01:15:42.585681 | orchestrator | 2025-05-30 01:15:42.585694 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-05-30 01:15:42.585716 | orchestrator | 
Friday 30 May 2025 01:11:50 +0000 (0:00:00.681) 0:04:18.129 ************ 2025-05-30 01:15:42.585728 | orchestrator | ok: [testbed-node-3] 2025-05-30 01:15:42.585739 | orchestrator | ok: [testbed-node-4] 2025-05-30 01:15:42.585751 | orchestrator | ok: [testbed-node-5] 2025-05-30 01:15:42.585763 | orchestrator | 2025-05-30 01:15:42.585776 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-05-30 01:15:42.585789 | orchestrator | Friday 30 May 2025 01:11:51 +0000 (0:00:00.496) 0:04:18.626 ************ 2025-05-30 01:15:42.585803 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-30 01:15:42.585817 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-30 01:15:42.585850 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-30 01:15:42.585864 | orchestrator | 2025-05-30 01:15:42.585874 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-05-30 01:15:42.585887 | orchestrator | Friday 30 May 2025 01:11:52 +0000 (0:00:01.402) 0:04:20.028 ************ 2025-05-30 01:15:42.585900 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-30 01:15:42.585913 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-30 01:15:42.585925 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-30 01:15:42.585938 | orchestrator | 2025-05-30 01:15:42.585951 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-05-30 01:15:42.585964 | orchestrator | Friday 30 May 2025 01:11:53 +0000 (0:00:01.382) 0:04:21.411 ************ 2025-05-30 01:15:42.585976 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-30 01:15:42.585990 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-30 01:15:42.586003 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-30 01:15:42.586433 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-05-30 01:15:42.586454 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-05-30 01:15:42.586462 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-05-30 01:15:42.586470 | orchestrator | 2025-05-30 01:15:42.586478 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-05-30 01:15:42.586486 | orchestrator | Friday 30 May 2025 01:11:59 +0000 (0:00:05.232) 0:04:26.644 ************ 2025-05-30 01:15:42.586494 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:15:42.586502 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:15:42.586521 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:15:42.586529 | orchestrator | 2025-05-30 01:15:42.586537 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-05-30 01:15:42.586544 | orchestrator | Friday 30 May 2025 01:11:59 +0000 (0:00:00.480) 0:04:27.124 ************ 2025-05-30 01:15:42.586552 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:15:42.586560 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:15:42.586568 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:15:42.586576 | orchestrator | 2025-05-30 01:15:42.586583 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-05-30 01:15:42.586591 | orchestrator | Friday 30 May 2025 01:12:00 +0000 (0:00:00.485) 0:04:27.610 ************ 2025-05-30 01:15:42.586599 | 
orchestrator | changed: [testbed-node-3] 2025-05-30 01:15:42.586607 | orchestrator | changed: [testbed-node-4] 2025-05-30 01:15:42.586615 | orchestrator | changed: [testbed-node-5] 2025-05-30 01:15:42.586622 | orchestrator | 2025-05-30 01:15:42.586630 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-05-30 01:15:42.586638 | orchestrator | Friday 30 May 2025 01:12:01 +0000 (0:00:01.357) 0:04:28.967 ************ 2025-05-30 01:15:42.586646 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-30 01:15:42.586655 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-30 01:15:42.586663 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-30 01:15:42.586671 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-30 01:15:42.586679 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-30 01:15:42.586687 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-30 01:15:42.586695 | orchestrator | 2025-05-30 01:15:42.586703 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-05-30 01:15:42.586710 | orchestrator | Friday 30 May 2025 01:12:04 +0000 (0:00:03.485) 0:04:32.453 ************ 2025-05-30 01:15:42.586718 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-30 01:15:42.586726 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-30 01:15:42.586734 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-30 01:15:42.586742 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-30 01:15:42.586750 | orchestrator | changed: [testbed-node-3] 2025-05-30 01:15:42.586758 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-30 01:15:42.586767 | orchestrator | changed: [testbed-node-4] 2025-05-30 01:15:42.586780 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-30 01:15:42.586793 | orchestrator | changed: [testbed-node-5] 2025-05-30 01:15:42.586805 | orchestrator | 2025-05-30 01:15:42.586818 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-05-30 01:15:42.586976 | orchestrator | Friday 30 May 2025 01:12:08 +0000 (0:00:03.509) 0:04:35.962 ************ 2025-05-30 01:15:42.586990 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:15:42.586998 | orchestrator | 2025-05-30 01:15:42.587006 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-05-30 01:15:42.587022 | orchestrator | Friday 30 May 2025 01:12:08 +0000 (0:00:00.125) 0:04:36.087 ************ 2025-05-30 01:15:42.587030 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:15:42.587038 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:15:42.587046 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:15:42.587055 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:15:42.587064 | orchestrator | skipping: [testbed-node-1] 2025-05-30 
01:15:42.587082 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.587091 | orchestrator | 2025-05-30 01:15:42.587101 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-05-30 01:15:42.587110 | orchestrator | Friday 30 May 2025 01:12:09 +0000 (0:00:00.917) 0:04:37.005 ************ 2025-05-30 01:15:42.587118 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-30 01:15:42.587128 | orchestrator | 2025-05-30 01:15:42.587137 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-05-30 01:15:42.587146 | orchestrator | Friday 30 May 2025 01:12:09 +0000 (0:00:00.374) 0:04:37.379 ************ 2025-05-30 01:15:42.587155 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:15:42.587163 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:15:42.587172 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:15:42.587181 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:15:42.587190 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.587199 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.587208 | orchestrator | 2025-05-30 01:15:42.587217 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-05-30 01:15:42.587227 | orchestrator | Friday 30 May 2025 01:12:10 +0000 (0:00:00.753) 0:04:38.132 ************ 2025-05-30 01:15:42.587249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-30 01:15:42.587259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-30 01:15:42.587268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-30 01:15:42.587280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-30 01:15:42.587293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-30 01:15:42.587306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-30 01:15:42.587314 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-30 01:15:42.587323 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-30 01:15:42.587331 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-30 01:15:42.587347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-30 01:15:42.587360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.587369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 
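The loop items of the "Copying over config.json files for services" task are entries of the role's nova services map, printed here as Python dict reprs. One entry, reconstructed from the logged values as YAML for readability (empty strings in the volumes list dropped; the surrounding map name, e.g. nova_cell_services, does not appear in the log and is assumed):

# nova-libvirt service definition, values taken from the logged item.
nova-libvirt:
  container_name: nova_libvirt
  group: compute
  enabled: true
  image: registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206
  pid_mode: host
  cgroupns_mode: host
  privileged: true
  volumes:
    - /etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro
    - /etc/localtime:/etc/localtime:ro
    - /etc/timezone:/etc/timezone:ro
    - /lib/modules:/lib/modules:ro
    - /run:/run:shared
    - /dev:/dev
    - /sys/fs/cgroup:/sys/fs/cgroup
    - kolla_logs:/var/log/kolla/
    - libvirtd:/var/lib/libvirt
    - nova_compute:/var/lib/nova/
    - nova_libvirt_qemu:/etc/libvirt/qemu
  dimensions:
    ulimits:
      memlock:
        soft: 67108864
        hard: 67108864
  healthcheck:
    interval: "30"
    retries: "3"
    start_period: "5"
    test: ["CMD-SHELL", "virsh version --daemon"]
    timeout: "30"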
01:15:42.587378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-30 01:15:42.587386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-30 01:15:42.587394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.587411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.587418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:15:42.587425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:15:42.587437 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-30 01:15:42.587444 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-30 01:15:42.587452 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.587459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.587471 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.587481 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 
'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.587488 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:15:42.587499 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:15:42.587506 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.587513 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.587520 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-30 01:15:42.587532 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.587542 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.587549 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:15:42.587561 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.587568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.587575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.587591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.587598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.587613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.587624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.587631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.587638 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.587649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.587660 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.587667 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.587680 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.587687 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.587694 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.587705 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.587712 | orchestrator | 2025-05-30 01:15:42.587719 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-05-30 01:15:42.587726 | orchestrator | Friday 30 May 2025 01:12:14 +0000 (0:00:03.959) 0:04:42.092 ************ 2025-05-30 01:15:42.587736 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'virsh version --daemon'], 'timeout': '30'}}})  2025-05-30 01:15:42.587744 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-30 01:15:42.587754 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.587761 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.587773 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:15:42.587780 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.587790 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-30 01:15:42.587798 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-30 01:15:42.587808 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.587815 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.587873 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:15:42.587881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  
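The changed/skipping pattern in the two copy tasks above is consistent with per-item gating: compute-only services (nova-libvirt, nova-ssh, nova-compute) change on testbed-node-3/4/5 and skip on testbed-node-0/1/2, control-plane services (nova-novncproxy, nova-conductor) do the opposite, and services flagged enabled: False (nova-spicehtml5proxy, nova-serialproxy, nova-compute-ironic) skip everywhere. A minimal Ansible sketch of that gating, assuming a nova_cell_services map and a generic template source; this is not the literal kolla-ansible task:

# Sketch only: illustrates enabled/group gating implied by the results above.
- name: Copying over config.json files for services (sketch)
  template:
    src: "{{ item.key }}.json.j2"
    dest: "/etc/kolla/{{ item.key }}/config.json"
  when:
    - item.value.enabled | bool
    - inventory_hostname in groups[item.value.group]
  with_dict: "{{ nova_cell_services }}"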
2025-05-30 01:15:42.587889 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-30 01:15:42.587900 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-30 01:15:42.587907 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.587918 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.587925 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:15:42.587939 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.587946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-30 01:15:42.587957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-30 01:15:42.587964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-30 01:15:42.587975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-30 01:15:42.587988 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-30 01:15:42.587995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-30 01:15:42.588002 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.588012 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.588020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-30 01:15:42.588030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.588043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:15:42.588050 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.588057 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.588067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-30 01:15:42.588075 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.588086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.588100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:15:42.588108 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.588115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-30 01:15:42.588122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.588132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:15:42.588139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.588150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.588162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.588169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-30 
01:15:42.588176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.588187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.588194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.588334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.588351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.588358 | orchestrator | 2025-05-30 01:15:42.588365 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-05-30 01:15:42.588372 | orchestrator | Friday 30 May 2025 01:12:21 +0000 (0:00:07.455) 0:04:49.547 ************ 2025-05-30 01:15:42.588378 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:15:42.588385 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:15:42.588392 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:15:42.588398 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:15:42.588405 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.588411 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.588418 | orchestrator | 2025-05-30 01:15:42.588424 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-05-30 01:15:42.588431 | orchestrator | Friday 30 May 2025 01:12:23 +0000 (0:00:01.833) 0:04:51.380 ************ 2025-05-30 01:15:42.588438 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-30 01:15:42.588445 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-30 01:15:42.588451 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-30 01:15:42.588458 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-30 01:15:42.588465 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-30 01:15:42.588471 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:15:42.588478 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-30 01:15:42.588485 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.588491 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-30 01:15:42.588498 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.588504 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-30 01:15:42.588511 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-30 01:15:42.588517 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-30 01:15:42.588524 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-30 01:15:42.588535 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-30 01:15:42.588542 | orchestrator | 2025-05-30 01:15:42.588566 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-05-30 01:15:42.588579 | orchestrator | Friday 30 May 2025 01:12:29 +0000 (0:00:05.330) 0:04:56.710 ************ 2025-05-30 01:15:42.588586 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:15:42.588592 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:15:42.588599 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:15:42.588605 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:15:42.588612 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.588619 | orchestrator | skipping: [testbed-node-2] 2025-05-30 
01:15:42.588625 | orchestrator | 2025-05-30 01:15:42.588632 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-05-30 01:15:42.588639 | orchestrator | Friday 30 May 2025 01:12:30 +0000 (0:00:00.929) 0:04:57.640 ************ 2025-05-30 01:15:42.588645 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-30 01:15:42.588652 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-30 01:15:42.588659 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-30 01:15:42.588666 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-30 01:15:42.588693 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-30 01:15:42.588701 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-30 01:15:42.588708 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-30 01:15:42.588715 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-30 01:15:42.588721 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-30 01:15:42.588728 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:15:42.588735 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-30 01:15:42.588742 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-30 01:15:42.588748 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.588755 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-30 01:15:42.588761 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.588768 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-30 01:15:42.588775 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-30 01:15:42.588782 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-30 01:15:42.588788 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-30 01:15:42.588795 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-30 01:15:42.588802 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-30 01:15:42.588808 | orchestrator | 2025-05-30 01:15:42.588815 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-05-30 01:15:42.588837 | orchestrator | Friday 30 May 2025 01:12:37 +0000 (0:00:06.942) 0:05:04.582 ************ 2025-05-30 01:15:42.588845 
| orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-30 01:15:42.588852 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-30 01:15:42.588865 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-30 01:15:42.588872 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-30 01:15:42.588879 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-30 01:15:42.588885 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-30 01:15:42.588892 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-30 01:15:42.588899 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-30 01:15:42.588905 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-30 01:15:42.588912 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-30 01:15:42.588918 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-30 01:15:42.588929 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-30 01:15:42.588937 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-30 01:15:42.588945 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.588952 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-30 01:15:42.588960 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.588968 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-30 01:15:42.588976 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:15:42.588983 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-30 01:15:42.588991 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-30 01:15:42.588999 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-30 01:15:42.589006 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-30 01:15:42.589013 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-30 01:15:42.589021 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-30 01:15:42.589029 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-30 01:15:42.589036 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-30 01:15:42.589064 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-30 01:15:42.589073 | orchestrator | 2025-05-30 01:15:42.589104 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-05-30 01:15:42.589112 | orchestrator | Friday 30 May 2025 01:12:46 +0000 (0:00:09.691) 0:05:14.273 ************ 2025-05-30 01:15:42.589119 | orchestrator | skipping: [testbed-node-3] 2025-05-30 
01:15:42.589127 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:15:42.589134 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:15:42.589141 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:15:42.589149 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.589157 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.589164 | orchestrator | 2025-05-30 01:15:42.589172 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-05-30 01:15:42.589180 | orchestrator | Friday 30 May 2025 01:12:47 +0000 (0:00:00.776) 0:05:15.050 ************ 2025-05-30 01:15:42.589188 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:15:42.589196 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:15:42.589203 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:15:42.589212 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:15:42.589225 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.589233 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.589241 | orchestrator | 2025-05-30 01:15:42.589248 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-05-30 01:15:42.589256 | orchestrator | Friday 30 May 2025 01:12:48 +0000 (0:00:00.977) 0:05:16.027 ************ 2025-05-30 01:15:42.589263 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:15:42.589271 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.589279 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.589286 | orchestrator | changed: [testbed-node-3] 2025-05-30 01:15:42.589294 | orchestrator | changed: [testbed-node-5] 2025-05-30 01:15:42.589302 | orchestrator | changed: [testbed-node-4] 2025-05-30 01:15:42.589310 | orchestrator | 2025-05-30 01:15:42.589318 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-05-30 01:15:42.589325 | orchestrator | Friday 30 May 2025 01:12:51 +0000 (0:00:03.073) 0:05:19.101 ************ 2025-05-30 01:15:42.589332 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-30 01:15:42.589340 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-30 01:15:42.589351 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.589378 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.589386 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:15:42.589399 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.589417 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.589432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 
'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.589443 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-30 01:15:42.589451 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-30 01:15:42.589477 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.589490 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.589497 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:15:42.589504 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 
'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:15:42.589512 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.589522 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.589529 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.589536 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:15:42.589563 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-30 01:15:42.589579 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': 
{'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-30 01:15:42.589586 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.589593 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.589600 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:15:42.589611 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.589622 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.589635 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.589642 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:15:42.589649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-30 01:15:42.589656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-30 01:15:42.589663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.589673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.589680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:15:42.589698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.589706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-30 01:15:42.589713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.589720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-30 01:15:42.589730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.589741 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:15:42.589748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.589759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.589766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:15:42.589773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.589780 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.589790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.589797 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.589804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-30 01:15:42.589819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-30 01:15:42.589843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.589851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.589858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:15:42.589865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.589875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.589890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.589898 | orchestrator | skipping: [testbed-node-2] 
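[Annotation for readers of this log, not part of the console output.] Each "item" printed in the loops above is one entry from the nova-cell role's service map, with keys such as 'container_name', 'group', 'image', 'enabled', 'volumes', 'dimensions' and 'healthcheck'. An item is reported as "skipping" on a host when that service is disabled or the host is not in the service's group, and as "changed"/"ok" otherwise. The short Python sketch below only illustrates that selection idea using two abbreviated entries copied from the log; the actual role expresses this through Ansible/Jinja conditions, so treat the function and host_groups parameter here as hypothetical illustration only.

    # Illustrative sketch only: mimics why loop items above show as "skipping" or not.
    # The real nova-cell role uses Ansible `when:` conditions, not Python.
    services = {
        "nova-conductor": {
            "container_name": "nova_conductor",
            "group": "nova-conductor",
            "enabled": True,
            "image": "registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206",
        },
        "nova-spicehtml5proxy": {
            "container_name": "nova_spicehtml5proxy",
            "group": "nova-spicehtml5proxy",
            "enabled": False,  # disabled services appear as "skipping" on every host
            "image": "registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206",
        },
    }

    def items_to_apply(services, host_groups):
        """Yield (name, definition) for services that are enabled and mapped to this host."""
        for name, svc in services.items():
            if svc["enabled"] and svc["group"] in host_groups:
                yield name, svc

    # Example: a host in the "nova-conductor" group acts only on the conductor entry.
    for name, svc in items_to_apply(services, host_groups={"nova-conductor"}):
        print(name, svc["container_name"])

This matches the pattern visible above: control nodes (testbed-node-0/1/2) act on nova-conductor and nova-novncproxy but skip compute-only entries, while compute nodes (testbed-node-3/4/5) do the reverse.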
2025-05-30 01:15:42.589904 | orchestrator | 2025-05-30 01:15:42.589911 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-05-30 01:15:42.589918 | orchestrator | Friday 30 May 2025 01:12:53 +0000 (0:00:02.040) 0:05:21.142 ************ 2025-05-30 01:15:42.589925 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-05-30 01:15:42.589932 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-05-30 01:15:42.589939 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:15:42.589945 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-05-30 01:15:42.589952 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-05-30 01:15:42.589959 | orchestrator | skipping: [testbed-node-4] 2025-05-30 01:15:42.589966 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-05-30 01:15:42.589972 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-05-30 01:15:42.589979 | orchestrator | skipping: [testbed-node-5] 2025-05-30 01:15:42.589986 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-05-30 01:15:42.589992 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-05-30 01:15:42.589999 | orchestrator | skipping: [testbed-node-0] 2025-05-30 01:15:42.590006 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-05-30 01:15:42.590012 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-05-30 01:15:42.590042 | orchestrator | skipping: [testbed-node-1] 2025-05-30 01:15:42.590049 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-05-30 01:15:42.590056 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-05-30 01:15:42.590063 | orchestrator | skipping: [testbed-node-2] 2025-05-30 01:15:42.590069 | orchestrator | 2025-05-30 01:15:42.590076 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-05-30 01:15:42.590083 | orchestrator | Friday 30 May 2025 01:12:54 +0000 (0:00:00.993) 0:05:22.136 ************ 2025-05-30 01:15:42.590090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-30 01:15:42.590105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-30 01:15:42.590113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-30 01:15:42.590124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-30 01:15:42.590132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-30 01:15:42.590139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-30 01:15:42.590146 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 
'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-30 01:15:42.590161 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-30 01:15:42.590172 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-30 01:15:42.590179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-30 01:15:42.590186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.590193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:15:42.590206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-30 01:15:42.590217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.590224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:15:42.590235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-30 01:15:42.590242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.590249 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-30 01:15:42.590256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:15:42.590269 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.590279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.590286 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:15:42.590298 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 
'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.590306 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-30 01:15:42.590313 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-30 01:15:42.590320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.590333 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.590343 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': 
'30'}}})  2025-05-30 01:15:42.590350 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:15:42.590361 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-30 01:15:42.590369 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.590376 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-30 01:15:42.590388 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.590395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-30 
01:15:42.590405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.590412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.590423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.590430 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.590444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.590451 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.590461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.590468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.590479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.590487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.590498 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.590505 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-30 01:15:42.590516 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.590526 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-30 01:15:42.590533 | orchestrator | 2025-05-30 01:15:42.590540 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-30 01:15:42.590547 | orchestrator | Friday 30 May 2025 01:12:57 +0000 (0:00:03.353) 0:05:25.489 ************ 2025-05-30 01:15:42.590553 | orchestrator | skipping: [testbed-node-3] 2025-05-30 01:15:42.590560 | orchestrator | skipping: 
[testbed-node-4]
2025-05-30 01:15:42.590567 | orchestrator | skipping: [testbed-node-5]
2025-05-30 01:15:42.590574 | orchestrator | skipping: [testbed-node-0]
2025-05-30 01:15:42.590580 | orchestrator | skipping: [testbed-node-1]
2025-05-30 01:15:42.590587 | orchestrator | skipping: [testbed-node-2]
2025-05-30 01:15:42.590594 | orchestrator |
2025-05-30 01:15:42.590601 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-30 01:15:42.590613 | orchestrator | Friday 30 May 2025 01:12:58 +0000 (0:00:00.946) 0:05:26.436 ************
2025-05-30 01:15:42.590620 | orchestrator |
2025-05-30 01:15:42.590626 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-30 01:15:42.590633 | orchestrator | Friday 30 May 2025 01:12:58 +0000 (0:00:00.109) 0:05:26.545 ************
2025-05-30 01:15:42.590640 | orchestrator |
2025-05-30 01:15:42.590647 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-30 01:15:42.590653 | orchestrator | Friday 30 May 2025 01:12:59 +0000 (0:00:00.303) 0:05:26.849 ************
2025-05-30 01:15:42.590660 | orchestrator |
2025-05-30 01:15:42.590667 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-30 01:15:42.590673 | orchestrator | Friday 30 May 2025 01:12:59 +0000 (0:00:00.113) 0:05:26.963 ************
2025-05-30 01:15:42.590680 | orchestrator |
2025-05-30 01:15:42.590687 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-30 01:15:42.590693 | orchestrator | Friday 30 May 2025 01:12:59 +0000 (0:00:00.309) 0:05:27.272 ************
2025-05-30 01:15:42.590700 | orchestrator |
2025-05-30 01:15:42.590707 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-30 01:15:42.590713 | orchestrator | Friday 30 May 2025 01:12:59 +0000 (0:00:00.108) 0:05:27.381 ************
2025-05-30 01:15:42.590720 | orchestrator |
2025-05-30 01:15:42.590727 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2025-05-30 01:15:42.590734 | orchestrator | Friday 30 May 2025 01:13:00 +0000 (0:00:00.291) 0:05:27.672 ************
2025-05-30 01:15:42.590740 | orchestrator | changed: [testbed-node-0]
2025-05-30 01:15:42.590747 | orchestrator | changed: [testbed-node-1]
2025-05-30 01:15:42.590754 | orchestrator | changed: [testbed-node-2]
2025-05-30 01:15:42.590760 | orchestrator |
2025-05-30 01:15:42.590767 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2025-05-30 01:15:42.590774 | orchestrator | Friday 30 May 2025 01:13:07 +0000 (0:00:07.581) 0:05:35.254 ************
2025-05-30 01:15:42.590781 | orchestrator | changed: [testbed-node-0]
2025-05-30 01:15:42.590787 | orchestrator | changed: [testbed-node-1]
2025-05-30 01:15:42.590794 | orchestrator | changed: [testbed-node-2]
2025-05-30 01:15:42.590800 | orchestrator |
2025-05-30 01:15:42.590807 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2025-05-30 01:15:42.590814 | orchestrator | Friday 30 May 2025 01:13:23 +0000 (0:00:16.033) 0:05:51.287 ************
2025-05-30 01:15:42.590820 | orchestrator | changed: [testbed-node-3]
2025-05-30 01:15:42.590847 | orchestrator | changed: [testbed-node-5]
2025-05-30 01:15:42.590854 | orchestrator | changed: [testbed-node-4]
2025-05-30 01:15:42.590860 | orchestrator |
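The containers restarted by the handlers above carry the small healthcheck dictionaries (interval/retries/start_period/test/timeout) that the "Check nova-cell containers" task dumped earlier in this log. A minimal sketch of what such a dict amounts to, assuming Python, Docker-style --health-* options, and that the bare numbers are seconds; the dict values are copied from the nova-libvirt item above, while the flag rendering itself is illustrative and not what kolla-ansible actually executes:

    # Illustrative only: map a healthcheck dict from the log onto docker-run style flags.
    healthcheck = {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "virsh version --daemon"],
        "timeout": "30",
    }

    def health_flags(hc):
        cmd = hc["test"][1]  # "CMD-SHELL" means the command is run through a shell
        return [
            "--health-cmd", cmd,
            "--health-interval", f"{hc['interval']}s",        # assuming the values are seconds
            "--health-retries", str(hc["retries"]),
            "--health-start-period", f"{hc['start_period']}s",
            "--health-timeout", f"{hc['timeout']}s",
        ]

    print(" ".join(health_flags(healthcheck)))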
2025-05-30 01:15:42.590867 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2025-05-30 01:15:42.590874 | orchestrator | Friday 30 May 2025 01:13:44 +0000 (0:00:21.028) 0:06:12.316 ************
2025-05-30 01:15:42.590880 | orchestrator | changed: [testbed-node-4]
2025-05-30 01:15:42.590887 | orchestrator | changed: [testbed-node-5]
2025-05-30 01:15:42.590894 | orchestrator | changed: [testbed-node-3]
2025-05-30 01:15:42.590900 | orchestrator |
2025-05-30 01:15:42.590907 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2025-05-30 01:15:42.590914 | orchestrator | Friday 30 May 2025 01:14:10 +0000 (0:00:25.815) 0:06:38.131 ************
2025-05-30 01:15:42.590920 | orchestrator | changed: [testbed-node-3]
2025-05-30 01:15:42.590927 | orchestrator | changed: [testbed-node-4]
2025-05-30 01:15:42.590934 | orchestrator | changed: [testbed-node-5]
2025-05-30 01:15:42.590940 | orchestrator |
2025-05-30 01:15:42.590947 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2025-05-30 01:15:42.590957 | orchestrator | Friday 30 May 2025 01:14:11 +0000 (0:00:00.726) 0:06:38.857 ************
2025-05-30 01:15:42.590964 | orchestrator | changed: [testbed-node-3]
2025-05-30 01:15:42.590971 | orchestrator | changed: [testbed-node-4]
2025-05-30 01:15:42.590978 | orchestrator | changed: [testbed-node-5]
2025-05-30 01:15:42.590985 | orchestrator |
2025-05-30 01:15:42.590991 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2025-05-30 01:15:42.591004 | orchestrator | Friday 30 May 2025 01:14:12 +0000 (0:00:00.971) 0:06:39.829 ************
2025-05-30 01:15:42.591011 | orchestrator | changed: [testbed-node-4]
2025-05-30 01:15:42.591017 | orchestrator | changed: [testbed-node-3]
2025-05-30 01:15:42.591024 | orchestrator | changed: [testbed-node-5]
2025-05-30 01:15:42.591031 | orchestrator |
2025-05-30 01:15:42.591037 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2025-05-30 01:15:42.591044 | orchestrator | Friday 30 May 2025 01:14:34 +0000 (0:00:21.938) 0:07:01.768 ************
2025-05-30 01:15:42.591051 | orchestrator | skipping: [testbed-node-3]
2025-05-30 01:15:42.591058 | orchestrator |
2025-05-30 01:15:42.591064 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2025-05-30 01:15:42.591071 | orchestrator | Friday 30 May 2025 01:14:34 +0000 (0:00:00.134) 0:07:01.902 ************
2025-05-30 01:15:42.591078 | orchestrator | skipping: [testbed-node-4]
2025-05-30 01:15:42.591085 | orchestrator | skipping: [testbed-node-3]
2025-05-30 01:15:42.591091 | orchestrator | skipping: [testbed-node-0]
2025-05-30 01:15:42.591098 | orchestrator | skipping: [testbed-node-2]
2025-05-30 01:15:42.591105 | orchestrator | skipping: [testbed-node-1]
2025-05-30 01:15:42.591112 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2025-05-30 01:15:42.591119 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-05-30 01:15:42.591126 | orchestrator |
2025-05-30 01:15:42.591136 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2025-05-30 01:15:42.591143 | orchestrator | Friday 30 May 2025 01:14:57 +0000 (0:00:23.131) 0:07:25.034 ************
2025-05-30 01:15:42.591149 | orchestrator | skipping: [testbed-node-3]
2025-05-30 01:15:42.591156 | orchestrator | skipping: [testbed-node-2]
2025-05-30 01:15:42.591163 | orchestrator | skipping: [testbed-node-5]
2025-05-30 01:15:42.591169 | orchestrator | skipping: [testbed-node-4]
2025-05-30 01:15:42.591176 | orchestrator | skipping: [testbed-node-1]
2025-05-30 01:15:42.591183 | orchestrator | skipping: [testbed-node-0]
2025-05-30 01:15:42.591189 | orchestrator |
2025-05-30 01:15:42.591196 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2025-05-30 01:15:42.591203 | orchestrator | Friday 30 May 2025 01:15:07 +0000 (0:00:09.751) 0:07:34.785 ************
2025-05-30 01:15:42.591210 | orchestrator | skipping: [testbed-node-3]
2025-05-30 01:15:42.591216 | orchestrator | skipping: [testbed-node-4]
2025-05-30 01:15:42.591223 | orchestrator | skipping: [testbed-node-0]
2025-05-30 01:15:42.591230 | orchestrator | skipping: [testbed-node-1]
2025-05-30 01:15:42.591236 | orchestrator | skipping: [testbed-node-2]
2025-05-30 01:15:42.591243 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5
2025-05-30 01:15:42.591250 | orchestrator |
2025-05-30 01:15:42.591256 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-05-30 01:15:42.591263 | orchestrator | Friday 30 May 2025 01:15:10 +0000 (0:00:03.232) 0:07:38.018 ************
2025-05-30 01:15:42.591270 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-05-30 01:15:42.591276 | orchestrator |
2025-05-30 01:15:42.591283 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-05-30 01:15:42.591290 | orchestrator | Friday 30 May 2025 01:15:20 +0000 (0:00:10.089) 0:07:48.107 ************
2025-05-30 01:15:42.591297 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-05-30 01:15:42.591303 | orchestrator |
2025-05-30 01:15:42.591310 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2025-05-30 01:15:42.591317 | orchestrator | Friday 30 May 2025 01:15:21 +0000 (0:00:01.184) 0:07:49.291 ************
2025-05-30 01:15:42.591323 | orchestrator | skipping: [testbed-node-5]
2025-05-30 01:15:42.591330 | orchestrator |
2025-05-30 01:15:42.591337 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2025-05-30 01:15:42.591344 | orchestrator | Friday 30 May 2025 01:15:22 +0000 (0:00:01.149) 0:07:50.441 ************
2025-05-30 01:15:42.591358 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-05-30 01:15:42.591365 | orchestrator |
2025-05-30 01:15:42.591372 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2025-05-30 01:15:42.591378 | orchestrator | Friday 30 May 2025 01:15:31 +0000 (0:00:08.865) 0:07:59.307 ************
2025-05-30 01:15:42.591385 | orchestrator | ok: [testbed-node-3]
2025-05-30 01:15:42.591392 | orchestrator | ok: [testbed-node-4]
2025-05-30 01:15:42.591399 | orchestrator | ok: [testbed-node-5]
2025-05-30 01:15:42.591405 | orchestrator | ok: [testbed-node-0]
2025-05-30 01:15:42.591412 | orchestrator | ok: [testbed-node-1]
2025-05-30 01:15:42.591419 | orchestrator | ok: [testbed-node-2]
2025-05-30 01:15:42.591425 | orchestrator |
2025-05-30 01:15:42.591432 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2025-05-30 01:15:42.591439 | orchestrator |
2025-05-30 01:15:42.591445 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2025-05-30 01:15:42.591452 | orchestrator | Friday 30 May 2025 01:15:33 +0000 (0:00:02.169) 0:08:01.477 ************
2025-05-30 01:15:42.591459 | orchestrator | changed: [testbed-node-0]
2025-05-30 01:15:42.591466 | orchestrator | changed: [testbed-node-1]
2025-05-30 01:15:42.591472 | orchestrator | changed: [testbed-node-2]
2025-05-30 01:15:42.591479 | orchestrator |
2025-05-30 01:15:42.591486 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2025-05-30 01:15:42.591492 | orchestrator |
2025-05-30 01:15:42.591499 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2025-05-30 01:15:42.591506 | orchestrator | Friday 30 May 2025 01:15:34 +0000 (0:00:01.023) 0:08:02.500 ************
2025-05-30 01:15:42.591512 | orchestrator | skipping: [testbed-node-0]
2025-05-30 01:15:42.591519 | orchestrator | skipping: [testbed-node-1]
2025-05-30 01:15:42.591526 | orchestrator | skipping: [testbed-node-2]
2025-05-30 01:15:42.591532 | orchestrator |
2025-05-30 01:15:42.591539 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2025-05-30 01:15:42.591546 | orchestrator |
2025-05-30 01:15:42.591556 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2025-05-30 01:15:42.591563 | orchestrator | Friday 30 May 2025 01:15:35 +0000 (0:00:00.793) 0:08:03.294 ************
2025-05-30 01:15:42.591570 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2025-05-30 01:15:42.591576 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-05-30 01:15:42.591583 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-05-30 01:15:42.591590 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2025-05-30 01:15:42.591597 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2025-05-30 01:15:42.591603 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2025-05-30 01:15:42.591610 | orchestrator | skipping: [testbed-node-3]
2025-05-30 01:15:42.591617 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2025-05-30 01:15:42.591624 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-05-30 01:15:42.591630 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-05-30 01:15:42.591637 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2025-05-30 01:15:42.591644 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2025-05-30 01:15:42.591650 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2025-05-30 01:15:42.591657 | orchestrator | skipping: [testbed-node-4]
2025-05-30 01:15:42.591664 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2025-05-30 01:15:42.591671 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-05-30 01:15:42.591677 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-05-30 01:15:42.591687 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2025-05-30 01:15:42.591694 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2025-05-30 01:15:42.591706 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2025-05-30 01:15:42.591713 | orchestrator | skipping: [testbed-node-5]
2025-05-30 01:15:42.591720 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2025-05-30 01:15:42.591726 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-05-30 01:15:42.591733 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-05-30 01:15:42.591740 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2025-05-30 01:15:42.591747 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2025-05-30 01:15:42.591753 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2025-05-30 01:15:42.591760 | orchestrator | skipping: [testbed-node-0]
2025-05-30 01:15:42.591767 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2025-05-30 01:15:42.591773 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-05-30 01:15:42.591780 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-05-30 01:15:42.591787 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2025-05-30 01:15:42.591794 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2025-05-30 01:15:42.591800 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2025-05-30 01:15:42.591807 | orchestrator | skipping: [testbed-node-1]
2025-05-30 01:15:42.591814 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2025-05-30 01:15:42.591820 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-05-30 01:15:42.591846 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-05-30 01:15:42.591853 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2025-05-30 01:15:42.591859 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2025-05-30 01:15:42.591866 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2025-05-30 01:15:42.591873 | orchestrator | skipping: [testbed-node-2]
2025-05-30 01:15:42.591880 | orchestrator |
2025-05-30 01:15:42.591886 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2025-05-30 01:15:42.591893 | orchestrator |
2025-05-30 01:15:42.591900 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2025-05-30 01:15:42.591906 | orchestrator | Friday 30 May 2025 01:15:37 +0000 (0:00:01.407) 0:08:04.701 ************
2025-05-30 01:15:42.591913 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-05-30 01:15:42.591920 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-05-30 01:15:42.591926 | orchestrator | skipping: [testbed-node-0]
2025-05-30 01:15:42.591933 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-05-30 01:15:42.591940 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-05-30 01:15:42.591947 | orchestrator | skipping: [testbed-node-1]
2025-05-30 01:15:42.591953 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-05-30 01:15:42.591960 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-05-30 01:15:42.591967 | orchestrator | skipping: [testbed-node-2]
2025-05-30 01:15:42.591973 | orchestrator |
2025-05-30 01:15:42.591980 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-05-30 01:15:42.591987 | orchestrator |
2025-05-30 01:15:42.591994 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-05-30 01:15:42.592000 | orchestrator | Friday 30 May 2025 01:15:37 +0000 (0:00:00.805) 0:08:05.507 ************
2025-05-30 01:15:42.592007 | orchestrator | skipping: [testbed-node-0]
2025-05-30 01:15:42.592014 | orchestrator |
2025-05-30 01:15:42.592020 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-05-30 01:15:42.592027 | orchestrator |
2025-05-30 01:15:42.592034 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-05-30 01:15:42.592040 | orchestrator | Friday 30 May 2025 01:15:38 +0000 (0:00:00.919) 0:08:06.426 ************
2025-05-30 01:15:42.592047 | orchestrator | skipping: [testbed-node-0]
2025-05-30 01:15:42.592059 | orchestrator | skipping: [testbed-node-1]
2025-05-30 01:15:42.592065 | orchestrator | skipping: [testbed-node-2]
2025-05-30 01:15:42.592072 | orchestrator |
2025-05-30 01:15:42.592082 | orchestrator | PLAY RECAP *********************************************************************
2025-05-30 01:15:42.592089 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-30 01:15:42.592097 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-05-30 01:15:42.592104 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-05-30 01:15:42.592110 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-05-30 01:15:42.592117 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-05-30 01:15:42.592124 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-05-30 01:15:42.592131 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2025-05-30 01:15:42.592137 | orchestrator |
2025-05-30 01:15:42.592144 | orchestrator |
2025-05-30 01:15:42.592154 | orchestrator | TASKS RECAP ********************************************************************
2025-05-30 01:15:42.592161 | orchestrator | Friday 30 May 2025 01:15:39 +0000 (0:00:00.554) 0:08:06.980 ************
2025-05-30 01:15:42.592168 | orchestrator | ===============================================================================
2025-05-30 01:15:42.592175 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 30.03s
2025-05-30 01:15:42.592182 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 25.82s
2025-05-30 01:15:42.592188 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 23.13s
2025-05-30 01:15:42.592195 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 21.94s
2025-05-30 01:15:42.592202 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 21.03s
2025-05-30 01:15:42.592209 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 18.81s
2025-05-30 01:15:42.592215 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 16.77s
2025-05-30 01:15:42.592222 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 16.03s
2025-05-30 01:15:42.592229 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 14.58s
2025-05-30 01:15:42.592235 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 11.93s
2025-05-30 01:15:42.592242 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.32s
2025-05-30 01:15:42.592249 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.28s
2025-05-30 01:15:42.592255 | orchestrator | nova-cell : Create cell ------------------------------------------------ 10.24s
2025-05-30 01:15:42.592262 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.09s
2025-05-30 01:15:42.592269 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 9.75s
2025-05-30 01:15:42.592275 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 9.69s
2025-05-30 01:15:42.592283 | orchestrator | nova-cell : Discover nova hosts ----------------------------------------- 8.87s
2025-05-30 01:15:42.592289 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 8.27s
2025-05-30 01:15:42.592296 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.20s
2025-05-30 01:15:42.592308 | orchestrator | nova-cell : Restart nova-conductor container ---------------------------- 7.58s
2025-05-30 01:15:42.592315 | orchestrator | 2025-05-30 01:15:42 | INFO  | Wait 1 second(s) until the next check
2025-05-30 01:15:45.630529 | orchestrator | 2025-05-30 01:15:45 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED
2025-05-30 01:15:45.630631 | orchestrator | 2025-05-30 01:15:45 | INFO  | Wait 1 second(s) until the next check
2025-05-30 01:15:48.678361 | orchestrator | 2025-05-30 01:15:48 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED
2025-05-30 01:15:48.678472 | orchestrator | 2025-05-30 01:15:48 | INFO  | Wait 1 second(s) until the next check
2025-05-30 01:15:51.732990 | orchestrator | 2025-05-30 01:15:51 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED
2025-05-30 01:15:51.733093 | orchestrator | 2025-05-30 01:15:51 | INFO  | Wait 1 second(s) until the next check
2025-05-30 01:15:54.783026 | orchestrator | 2025-05-30 01:15:54 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED
2025-05-30 01:15:54.783138 | orchestrator | 2025-05-30 01:15:54 | INFO  | Wait 1 second(s) until the next check
2025-05-30 01:15:57.837509 | orchestrator | 2025-05-30 01:15:57 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED
2025-05-30 01:15:57.837596 | orchestrator | 2025-05-30 01:15:57 | INFO  | Wait 1 second(s) until the next check
2025-05-30 01:16:00.888790 | orchestrator | 2025-05-30 01:16:00 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED
2025-05-30 01:16:00.888927 | orchestrator | 2025-05-30 01:16:00 | INFO  | Wait 1 second(s) until the next check
2025-05-30 01:16:03.937067 | orchestrator | 2025-05-30 01:16:03 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED
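The status lines above and below come from the deployment command on the orchestrator waiting on OSISM task fb4c5da4-6736-4528-a700-d20c81fc8612: it checks the task state every few seconds until the task leaves STARTED. A minimal sketch of that poll-until-done pattern, assuming Python; get_task_state is a hypothetical stand-in for whatever call the real client makes, and the terminal state names are assumed, not taken from the log:

    import time

    TERMINAL_STATES = {"SUCCESS", "FAILURE"}  # assumed terminal states

    def wait_for_task(task_id, get_task_state, interval=1.0):
        # Poll the task until it reaches a terminal state, logging like the job output.
        while True:
            state = get_task_state(task_id)  # hypothetical: query the task backend
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                return state
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)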
2025-05-30 01:16:03.937151 | orchestrator | 2025-05-30 01:16:03 | INFO  | Wait 1 second(s) until the next check
2025-05-30 01:18:57.828260 | orchestrator | 2025-05-30 01:18:57 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED
2025-05-30 01:18:57.828380 | orchestrator | 2025-05-30 01:18:57 | INFO  | Wait 1 
second(s) until the next check 2025-05-30 01:19:00.884263 | orchestrator | 2025-05-30 01:19:00 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:19:00.884366 | orchestrator | 2025-05-30 01:19:00 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:19:03.937755 | orchestrator | 2025-05-30 01:19:03 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:19:03.937909 | orchestrator | 2025-05-30 01:19:03 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:19:06.994516 | orchestrator | 2025-05-30 01:19:06 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:19:06.994616 | orchestrator | 2025-05-30 01:19:06 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:19:10.052951 | orchestrator | 2025-05-30 01:19:10 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:19:10.053067 | orchestrator | 2025-05-30 01:19:10 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:19:13.107247 | orchestrator | 2025-05-30 01:19:13 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:19:13.107368 | orchestrator | 2025-05-30 01:19:13 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:19:16.166835 | orchestrator | 2025-05-30 01:19:16 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:19:16.167006 | orchestrator | 2025-05-30 01:19:16 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:19:19.220566 | orchestrator | 2025-05-30 01:19:19 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:19:19.220671 | orchestrator | 2025-05-30 01:19:19 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:19:22.272282 | orchestrator | 2025-05-30 01:19:22 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:19:22.272409 | orchestrator | 2025-05-30 01:19:22 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:19:25.324834 | orchestrator | 2025-05-30 01:19:25 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:19:25.324991 | orchestrator | 2025-05-30 01:19:25 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:19:28.370322 | orchestrator | 2025-05-30 01:19:28 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:19:28.370451 | orchestrator | 2025-05-30 01:19:28 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:19:31.415537 | orchestrator | 2025-05-30 01:19:31 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:19:31.415607 | orchestrator | 2025-05-30 01:19:31 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:19:34.465209 | orchestrator | 2025-05-30 01:19:34 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:19:34.465319 | orchestrator | 2025-05-30 01:19:34 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:19:37.520174 | orchestrator | 2025-05-30 01:19:37 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:19:37.520276 | orchestrator | 2025-05-30 01:19:37 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:19:40.573284 | orchestrator | 2025-05-30 01:19:40 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:19:40.573401 | orchestrator | 2025-05-30 01:19:40 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:19:43.621761 | orchestrator | 
2025-05-30 01:19:43 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:19:43.621914 | orchestrator | 2025-05-30 01:19:43 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:19:46.671184 | orchestrator | 2025-05-30 01:19:46 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:19:46.671330 | orchestrator | 2025-05-30 01:19:46 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:19:49.711467 | orchestrator | 2025-05-30 01:19:49 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:19:49.711561 | orchestrator | 2025-05-30 01:19:49 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:19:52.767628 | orchestrator | 2025-05-30 01:19:52 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:19:52.767743 | orchestrator | 2025-05-30 01:19:52 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:19:55.819031 | orchestrator | 2025-05-30 01:19:55 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:19:55.819134 | orchestrator | 2025-05-30 01:19:55 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:19:58.869909 | orchestrator | 2025-05-30 01:19:58 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:19:58.870091 | orchestrator | 2025-05-30 01:19:58 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:20:01.923488 | orchestrator | 2025-05-30 01:20:01 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:20:01.923614 | orchestrator | 2025-05-30 01:20:01 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:20:04.972786 | orchestrator | 2025-05-30 01:20:04 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:20:04.972973 | orchestrator | 2025-05-30 01:20:04 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:20:08.029412 | orchestrator | 2025-05-30 01:20:08 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:20:08.029515 | orchestrator | 2025-05-30 01:20:08 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:20:11.075685 | orchestrator | 2025-05-30 01:20:11 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:20:11.075783 | orchestrator | 2025-05-30 01:20:11 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:20:14.124403 | orchestrator | 2025-05-30 01:20:14 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:20:14.124487 | orchestrator | 2025-05-30 01:20:14 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:20:17.177354 | orchestrator | 2025-05-30 01:20:17 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:20:17.177444 | orchestrator | 2025-05-30 01:20:17 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:20:20.237414 | orchestrator | 2025-05-30 01:20:20 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:20:20.237519 | orchestrator | 2025-05-30 01:20:20 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:20:23.296933 | orchestrator | 2025-05-30 01:20:23 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:20:23.297019 | orchestrator | 2025-05-30 01:20:23 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:20:26.352596 | orchestrator | 2025-05-30 01:20:26 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in 
state STARTED 2025-05-30 01:20:26.352700 | orchestrator | 2025-05-30 01:20:26 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:20:29.406771 | orchestrator | 2025-05-30 01:20:29 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:20:29.406965 | orchestrator | 2025-05-30 01:20:29 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:20:32.458673 | orchestrator | 2025-05-30 01:20:32 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:20:32.458758 | orchestrator | 2025-05-30 01:20:32 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:20:35.511935 | orchestrator | 2025-05-30 01:20:35 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:20:35.512048 | orchestrator | 2025-05-30 01:20:35 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:20:38.569826 | orchestrator | 2025-05-30 01:20:38 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:20:38.569969 | orchestrator | 2025-05-30 01:20:38 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:20:41.616362 | orchestrator | 2025-05-30 01:20:41 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:20:41.616495 | orchestrator | 2025-05-30 01:20:41 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:20:44.675313 | orchestrator | 2025-05-30 01:20:44 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:20:44.675417 | orchestrator | 2025-05-30 01:20:44 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:20:47.731684 | orchestrator | 2025-05-30 01:20:47 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:20:47.731818 | orchestrator | 2025-05-30 01:20:47 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:20:50.780591 | orchestrator | 2025-05-30 01:20:50 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:20:50.780681 | orchestrator | 2025-05-30 01:20:50 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:20:53.828885 | orchestrator | 2025-05-30 01:20:53 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:20:53.828983 | orchestrator | 2025-05-30 01:20:53 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:20:56.883017 | orchestrator | 2025-05-30 01:20:56 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:20:56.883105 | orchestrator | 2025-05-30 01:20:56 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:20:59.931557 | orchestrator | 2025-05-30 01:20:59 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:20:59.931668 | orchestrator | 2025-05-30 01:20:59 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:21:02.979783 | orchestrator | 2025-05-30 01:21:02 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:21:02.980397 | orchestrator | 2025-05-30 01:21:02 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:21:06.032316 | orchestrator | 2025-05-30 01:21:06 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:21:06.032424 | orchestrator | 2025-05-30 01:21:06 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:21:09.075794 | orchestrator | 2025-05-30 01:21:09 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:21:09.075919 | orchestrator | 2025-05-30 01:21:09 | 
INFO  | Wait 1 second(s) until the next check 2025-05-30 01:21:12.125325 | orchestrator | 2025-05-30 01:21:12 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:21:12.125442 | orchestrator | 2025-05-30 01:21:12 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:21:15.172205 | orchestrator | 2025-05-30 01:21:15 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:21:15.172312 | orchestrator | 2025-05-30 01:21:15 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:21:18.221939 | orchestrator | 2025-05-30 01:21:18 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:21:18.222113 | orchestrator | 2025-05-30 01:21:18 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:21:21.273696 | orchestrator | 2025-05-30 01:21:21 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:21:21.273804 | orchestrator | 2025-05-30 01:21:21 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:21:24.328814 | orchestrator | 2025-05-30 01:21:24 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:21:24.328992 | orchestrator | 2025-05-30 01:21:24 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:21:27.385596 | orchestrator | 2025-05-30 01:21:27 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:21:27.385697 | orchestrator | 2025-05-30 01:21:27 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:21:30.446635 | orchestrator | 2025-05-30 01:21:30 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:21:30.446715 | orchestrator | 2025-05-30 01:21:30 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:21:33.500735 | orchestrator | 2025-05-30 01:21:33 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:21:33.500826 | orchestrator | 2025-05-30 01:21:33 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:21:36.550708 | orchestrator | 2025-05-30 01:21:36 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:21:36.550824 | orchestrator | 2025-05-30 01:21:36 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:21:39.599729 | orchestrator | 2025-05-30 01:21:39 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:21:39.599897 | orchestrator | 2025-05-30 01:21:39 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:21:42.653059 | orchestrator | 2025-05-30 01:21:42 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:21:42.653167 | orchestrator | 2025-05-30 01:21:42 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:21:45.701178 | orchestrator | 2025-05-30 01:21:45 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:21:45.701261 | orchestrator | 2025-05-30 01:21:45 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:21:48.746812 | orchestrator | 2025-05-30 01:21:48 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:21:48.746982 | orchestrator | 2025-05-30 01:21:48 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:21:51.798729 | orchestrator | 2025-05-30 01:21:51 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:21:51.798901 | orchestrator | 2025-05-30 01:21:51 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:21:54.851954 | 
orchestrator | 2025-05-30 01:21:54 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:21:54.852069 | orchestrator | 2025-05-30 01:21:54 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:21:57.910331 | orchestrator | 2025-05-30 01:21:57 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:21:57.910435 | orchestrator | 2025-05-30 01:21:57 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:22:00.964984 | orchestrator | 2025-05-30 01:22:00 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:22:00.965099 | orchestrator | 2025-05-30 01:22:00 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:22:04.007486 | orchestrator | 2025-05-30 01:22:04 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:22:04.007614 | orchestrator | 2025-05-30 01:22:04 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:22:07.059553 | orchestrator | 2025-05-30 01:22:07 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:22:07.059669 | orchestrator | 2025-05-30 01:22:07 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:22:10.104883 | orchestrator | 2025-05-30 01:22:10 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:22:10.104973 | orchestrator | 2025-05-30 01:22:10 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:22:13.153460 | orchestrator | 2025-05-30 01:22:13 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:22:13.153579 | orchestrator | 2025-05-30 01:22:13 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:22:16.207218 | orchestrator | 2025-05-30 01:22:16 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:22:16.207322 | orchestrator | 2025-05-30 01:22:16 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:22:19.264758 | orchestrator | 2025-05-30 01:22:19 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:22:19.264933 | orchestrator | 2025-05-30 01:22:19 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:22:22.319769 | orchestrator | 2025-05-30 01:22:22 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:22:22.319934 | orchestrator | 2025-05-30 01:22:22 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:22:25.368256 | orchestrator | 2025-05-30 01:22:25 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:22:25.368356 | orchestrator | 2025-05-30 01:22:25 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:22:28.421157 | orchestrator | 2025-05-30 01:22:28 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:22:28.421286 | orchestrator | 2025-05-30 01:22:28 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:22:31.480347 | orchestrator | 2025-05-30 01:22:31 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:22:31.480465 | orchestrator | 2025-05-30 01:22:31 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:22:34.535109 | orchestrator | 2025-05-30 01:22:34 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:22:34.535212 | orchestrator | 2025-05-30 01:22:34 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:22:37.584986 | orchestrator | 2025-05-30 01:22:37 | INFO  | Task 
fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:22:37.585095 | orchestrator | 2025-05-30 01:22:37 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:22:40.638412 | orchestrator | 2025-05-30 01:22:40 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:22:40.638522 | orchestrator | 2025-05-30 01:22:40 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:22:43.695138 | orchestrator | 2025-05-30 01:22:43 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:22:43.695251 | orchestrator | 2025-05-30 01:22:43 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:22:46.746226 | orchestrator | 2025-05-30 01:22:46 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:22:46.746313 | orchestrator | 2025-05-30 01:22:46 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:22:49.795537 | orchestrator | 2025-05-30 01:22:49 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:22:49.795652 | orchestrator | 2025-05-30 01:22:49 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:22:52.848144 | orchestrator | 2025-05-30 01:22:52 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:22:52.848250 | orchestrator | 2025-05-30 01:22:52 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:22:55.900313 | orchestrator | 2025-05-30 01:22:55 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:22:55.900394 | orchestrator | 2025-05-30 01:22:55 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:22:58.954635 | orchestrator | 2025-05-30 01:22:58 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:22:58.954742 | orchestrator | 2025-05-30 01:22:58 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:23:02.001583 | orchestrator | 2025-05-30 01:23:01 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:23:02.001729 | orchestrator | 2025-05-30 01:23:01 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:23:05.055981 | orchestrator | 2025-05-30 01:23:05 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:23:05.056082 | orchestrator | 2025-05-30 01:23:05 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:23:08.106941 | orchestrator | 2025-05-30 01:23:08 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:23:08.107081 | orchestrator | 2025-05-30 01:23:08 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:23:11.157652 | orchestrator | 2025-05-30 01:23:11 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:23:11.157757 | orchestrator | 2025-05-30 01:23:11 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:23:14.205718 | orchestrator | 2025-05-30 01:23:14 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:23:14.205902 | orchestrator | 2025-05-30 01:23:14 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:23:17.262300 | orchestrator | 2025-05-30 01:23:17 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:23:17.262401 | orchestrator | 2025-05-30 01:23:17 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:23:20.317592 | orchestrator | 2025-05-30 01:23:20 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 
01:23:20.317703 | orchestrator | 2025-05-30 01:23:20 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:23:23.373313 | orchestrator | 2025-05-30 01:23:23 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:23:23.373427 | orchestrator | 2025-05-30 01:23:23 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:23:26.430462 | orchestrator | 2025-05-30 01:23:26 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:23:26.430627 | orchestrator | 2025-05-30 01:23:26 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:23:29.478966 | orchestrator | 2025-05-30 01:23:29 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:23:29.479074 | orchestrator | 2025-05-30 01:23:29 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:23:32.521286 | orchestrator | 2025-05-30 01:23:32 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:23:32.521391 | orchestrator | 2025-05-30 01:23:32 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:23:35.579645 | orchestrator | 2025-05-30 01:23:35 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:23:35.579759 | orchestrator | 2025-05-30 01:23:35 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:23:38.636376 | orchestrator | 2025-05-30 01:23:38 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:23:38.636486 | orchestrator | 2025-05-30 01:23:38 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:23:41.691495 | orchestrator | 2025-05-30 01:23:41 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:23:41.691583 | orchestrator | 2025-05-30 01:23:41 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:23:44.746519 | orchestrator | 2025-05-30 01:23:44 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:23:44.746652 | orchestrator | 2025-05-30 01:23:44 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:23:47.800377 | orchestrator | 2025-05-30 01:23:47 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:23:47.800512 | orchestrator | 2025-05-30 01:23:47 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:23:50.852773 | orchestrator | 2025-05-30 01:23:50 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:23:50.852935 | orchestrator | 2025-05-30 01:23:50 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:23:53.908971 | orchestrator | 2025-05-30 01:23:53 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:23:53.909077 | orchestrator | 2025-05-30 01:23:53 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:23:56.962187 | orchestrator | 2025-05-30 01:23:56 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:23:56.962283 | orchestrator | 2025-05-30 01:23:56 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:24:00.021059 | orchestrator | 2025-05-30 01:24:00 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:24:00.021152 | orchestrator | 2025-05-30 01:24:00 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:24:03.073125 | orchestrator | 2025-05-30 01:24:03 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:24:03.073423 | orchestrator | 2025-05-30 01:24:03 | INFO  | Wait 1 second(s) 
until the next check 2025-05-30 01:24:06.121136 | orchestrator | 2025-05-30 01:24:06 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:24:06.121244 | orchestrator | 2025-05-30 01:24:06 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:24:09.169541 | orchestrator | 2025-05-30 01:24:09 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:24:09.169636 | orchestrator | 2025-05-30 01:24:09 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:24:12.223424 | orchestrator | 2025-05-30 01:24:12 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:24:12.223496 | orchestrator | 2025-05-30 01:24:12 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:24:15.282161 | orchestrator | 2025-05-30 01:24:15 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:24:15.282257 | orchestrator | 2025-05-30 01:24:15 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:24:18.325952 | orchestrator | 2025-05-30 01:24:18 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:24:18.326168 | orchestrator | 2025-05-30 01:24:18 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:24:21.379990 | orchestrator | 2025-05-30 01:24:21 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:24:21.380081 | orchestrator | 2025-05-30 01:24:21 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:24:24.434368 | orchestrator | 2025-05-30 01:24:24 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:24:24.434475 | orchestrator | 2025-05-30 01:24:24 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:24:27.494191 | orchestrator | 2025-05-30 01:24:27 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:24:27.495055 | orchestrator | 2025-05-30 01:24:27 | INFO  | Task f0c08481-26ae-4171-94f3-6087c5d50bf5 is in state STARTED 2025-05-30 01:24:27.495092 | orchestrator | 2025-05-30 01:24:27 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:24:30.546654 | orchestrator | 2025-05-30 01:24:30 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:24:30.547560 | orchestrator | 2025-05-30 01:24:30 | INFO  | Task f0c08481-26ae-4171-94f3-6087c5d50bf5 is in state STARTED 2025-05-30 01:24:30.548137 | orchestrator | 2025-05-30 01:24:30 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:24:33.610566 | orchestrator | 2025-05-30 01:24:33 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:24:33.612963 | orchestrator | 2025-05-30 01:24:33 | INFO  | Task f0c08481-26ae-4171-94f3-6087c5d50bf5 is in state STARTED 2025-05-30 01:24:33.613373 | orchestrator | 2025-05-30 01:24:33 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:24:36.678489 | orchestrator | 2025-05-30 01:24:36 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:24:36.680119 | orchestrator | 2025-05-30 01:24:36 | INFO  | Task f0c08481-26ae-4171-94f3-6087c5d50bf5 is in state STARTED 2025-05-30 01:24:36.680376 | orchestrator | 2025-05-30 01:24:36 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:24:39.732426 | orchestrator | 2025-05-30 01:24:39 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:24:39.733962 | orchestrator | 2025-05-30 01:24:39 | INFO  | Task f0c08481-26ae-4171-94f3-6087c5d50bf5 is in 
state SUCCESS 2025-05-30 01:24:39.734007 | orchestrator | 2025-05-30 01:24:39 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:24:42.776872 | orchestrator | 2025-05-30 01:24:42 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:24:42.776983 | orchestrator | 2025-05-30 01:24:42 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:24:45.831500 | orchestrator | 2025-05-30 01:24:45 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:24:45.831618 | orchestrator | 2025-05-30 01:24:45 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:24:48.881237 | orchestrator | 2025-05-30 01:24:48 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:24:48.881347 | orchestrator | 2025-05-30 01:24:48 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:24:51.932704 | orchestrator | 2025-05-30 01:24:51 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:24:51.932904 | orchestrator | 2025-05-30 01:24:51 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:24:54.979643 | orchestrator | 2025-05-30 01:24:54 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:24:54.979736 | orchestrator | 2025-05-30 01:24:54 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:24:58.029056 | orchestrator | 2025-05-30 01:24:58 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:24:58.029152 | orchestrator | 2025-05-30 01:24:58 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:25:01.084918 | orchestrator | 2025-05-30 01:25:01 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:25:01.085032 | orchestrator | 2025-05-30 01:25:01 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:25:04.137043 | orchestrator | 2025-05-30 01:25:04 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:25:04.137127 | orchestrator | 2025-05-30 01:25:04 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:25:07.191013 | orchestrator | 2025-05-30 01:25:07 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:25:07.191265 | orchestrator | 2025-05-30 01:25:07 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:25:10.238437 | orchestrator | 2025-05-30 01:25:10 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:25:10.238573 | orchestrator | 2025-05-30 01:25:10 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:25:13.284140 | orchestrator | 2025-05-30 01:25:13 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:25:13.284234 | orchestrator | 2025-05-30 01:25:13 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:25:16.333984 | orchestrator | 2025-05-30 01:25:16 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:25:16.334232 | orchestrator | 2025-05-30 01:25:16 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:25:19.388415 | orchestrator | 2025-05-30 01:25:19 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:25:19.388505 | orchestrator | 2025-05-30 01:25:19 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:25:22.430249 | orchestrator | 2025-05-30 01:25:22 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:25:22.430348 | orchestrator | 2025-05-30 01:25:22 | 
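The block above is the output of a simple state-polling loop: the deployment client repeatedly queries the state of task fb4c5da4-6736-4528-a700-d20c81fc8612 (and, for a short stretch, f0c08481-26ae-4171-94f3-6087c5d50bf5), logs the reported state, and sleeps before the next check until the task reaches a terminal state such as SUCCESS. The sketch below is a minimal, hypothetical reconstruction of that pattern in Python; the names (wait_for_task, get_state) and the Celery-style terminal states are assumptions made for illustration, not the actual OSISM client code.

import logging
import time

# Log format approximating the "timestamp | LEVEL | message" lines seen above (assumption).
logging.basicConfig(format="%(asctime)s | %(levelname)-5s | %(message)s",
                    datefmt="%Y-%m-%d %H:%M:%S", level=logging.INFO)

# Celery-style terminal states; anything else means "keep waiting" (assumption).
TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}


def wait_for_task(task_id, get_state, check_interval=1.0):
    """Poll get_state(task_id) until a terminal state is reached, logging each check."""
    while True:
        state = get_state(task_id)
        logging.info("Task %s is in state %s", task_id, state)
        if state in TERMINAL_STATES:
            return state
        logging.info("Wait %d second(s) until the next check", check_interval)
        time.sleep(check_interval)


if __name__ == "__main__":
    # Dummy backend that reports STARTED twice and then SUCCESS, to exercise the loop.
    states = iter(["STARTED", "STARTED", "SUCCESS"])
    final = wait_for_task("fb4c5da4-6736-4528-a700-d20c81fc8612",
                          lambda _task_id: next(states))
    logging.info("Final state: %s", final)

Note that the timestamps in the real log advance by roughly three seconds per iteration even though the message announces a one-second wait; presumably the state lookup itself adds overhead on top of the sleep.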
[... task fb4c5da4-6736-4528-a700-d20c81fc8612 continues to be polled roughly every 3 seconds and remains in state STARTED from 2025-05-30 01:24:42 onwards ...]
2025-05-30 01:31:10.313899 | orchestrator | 2025-05-30 01:31:10 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED
2025-05-30 01:31:10.313992 | orchestrator | 2025-05-30 01:31:10 | INFO  | Wait 1 second(s) until the next check
2025-05-30 01:31:13.371643 | orchestrator |
2025-05-30 01:31:13 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:31:13.371820 | orchestrator | 2025-05-30 01:31:13 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:31:16.418905 | orchestrator | 2025-05-30 01:31:16 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:31:16.419017 | orchestrator | 2025-05-30 01:31:16 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:31:19.467940 | orchestrator | 2025-05-30 01:31:19 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:31:19.468048 | orchestrator | 2025-05-30 01:31:19 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:31:22.520327 | orchestrator | 2025-05-30 01:31:22 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:31:22.520436 | orchestrator | 2025-05-30 01:31:22 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:31:25.569949 | orchestrator | 2025-05-30 01:31:25 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:31:25.570153 | orchestrator | 2025-05-30 01:31:25 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:31:28.620903 | orchestrator | 2025-05-30 01:31:28 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:31:28.621019 | orchestrator | 2025-05-30 01:31:28 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:31:31.674549 | orchestrator | 2025-05-30 01:31:31 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:31:31.674671 | orchestrator | 2025-05-30 01:31:31 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:31:34.722845 | orchestrator | 2025-05-30 01:31:34 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:31:34.722938 | orchestrator | 2025-05-30 01:31:34 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:31:37.778312 | orchestrator | 2025-05-30 01:31:37 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:31:37.778421 | orchestrator | 2025-05-30 01:31:37 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:31:40.831546 | orchestrator | 2025-05-30 01:31:40 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:31:40.831661 | orchestrator | 2025-05-30 01:31:40 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:31:43.885683 | orchestrator | 2025-05-30 01:31:43 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:31:43.885842 | orchestrator | 2025-05-30 01:31:43 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:31:46.925674 | orchestrator | 2025-05-30 01:31:46 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:31:46.925831 | orchestrator | 2025-05-30 01:31:46 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:31:49.976332 | orchestrator | 2025-05-30 01:31:49 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:31:49.976438 | orchestrator | 2025-05-30 01:31:49 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:31:53.040903 | orchestrator | 2025-05-30 01:31:53 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:31:53.041012 | orchestrator | 2025-05-30 01:31:53 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:31:56.091081 | orchestrator | 2025-05-30 01:31:56 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in 
state STARTED 2025-05-30 01:31:56.091191 | orchestrator | 2025-05-30 01:31:56 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:31:59.145613 | orchestrator | 2025-05-30 01:31:59 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:31:59.145720 | orchestrator | 2025-05-30 01:31:59 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:32:02.196535 | orchestrator | 2025-05-30 01:32:02 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:32:02.196643 | orchestrator | 2025-05-30 01:32:02 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:32:05.247953 | orchestrator | 2025-05-30 01:32:05 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:32:05.248082 | orchestrator | 2025-05-30 01:32:05 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:32:08.292912 | orchestrator | 2025-05-30 01:32:08 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:32:08.293004 | orchestrator | 2025-05-30 01:32:08 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:32:11.340588 | orchestrator | 2025-05-30 01:32:11 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:32:11.340698 | orchestrator | 2025-05-30 01:32:11 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:32:14.392856 | orchestrator | 2025-05-30 01:32:14 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:32:14.392968 | orchestrator | 2025-05-30 01:32:14 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:32:17.444913 | orchestrator | 2025-05-30 01:32:17 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:32:17.445050 | orchestrator | 2025-05-30 01:32:17 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:32:20.498997 | orchestrator | 2025-05-30 01:32:20 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:32:20.499109 | orchestrator | 2025-05-30 01:32:20 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:32:23.551420 | orchestrator | 2025-05-30 01:32:23 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:32:23.551550 | orchestrator | 2025-05-30 01:32:23 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:32:26.603077 | orchestrator | 2025-05-30 01:32:26 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:32:26.603173 | orchestrator | 2025-05-30 01:32:26 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:32:29.654335 | orchestrator | 2025-05-30 01:32:29 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:32:29.654449 | orchestrator | 2025-05-30 01:32:29 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:32:32.698368 | orchestrator | 2025-05-30 01:32:32 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:32:32.698474 | orchestrator | 2025-05-30 01:32:32 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:32:35.744474 | orchestrator | 2025-05-30 01:32:35 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:32:35.744584 | orchestrator | 2025-05-30 01:32:35 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:32:38.790128 | orchestrator | 2025-05-30 01:32:38 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:32:38.790243 | orchestrator | 2025-05-30 01:32:38 | 
INFO  | Wait 1 second(s) until the next check 2025-05-30 01:32:41.834485 | orchestrator | 2025-05-30 01:32:41 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:32:41.834607 | orchestrator | 2025-05-30 01:32:41 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:32:44.886225 | orchestrator | 2025-05-30 01:32:44 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:32:44.886318 | orchestrator | 2025-05-30 01:32:44 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:32:47.939807 | orchestrator | 2025-05-30 01:32:47 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:32:47.939922 | orchestrator | 2025-05-30 01:32:47 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:32:50.998667 | orchestrator | 2025-05-30 01:32:50 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:32:50.998742 | orchestrator | 2025-05-30 01:32:50 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:32:54.057935 | orchestrator | 2025-05-30 01:32:54 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:32:54.058082 | orchestrator | 2025-05-30 01:32:54 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:32:57.106373 | orchestrator | 2025-05-30 01:32:57 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:32:57.106486 | orchestrator | 2025-05-30 01:32:57 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:33:00.159076 | orchestrator | 2025-05-30 01:33:00 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:33:00.159185 | orchestrator | 2025-05-30 01:33:00 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:33:03.214109 | orchestrator | 2025-05-30 01:33:03 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:33:03.214297 | orchestrator | 2025-05-30 01:33:03 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:33:06.264865 | orchestrator | 2025-05-30 01:33:06 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:33:06.264958 | orchestrator | 2025-05-30 01:33:06 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:33:09.318087 | orchestrator | 2025-05-30 01:33:09 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:33:09.318200 | orchestrator | 2025-05-30 01:33:09 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:33:12.363618 | orchestrator | 2025-05-30 01:33:12 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:33:12.363724 | orchestrator | 2025-05-30 01:33:12 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:33:15.420713 | orchestrator | 2025-05-30 01:33:15 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:33:15.420849 | orchestrator | 2025-05-30 01:33:15 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:33:18.479970 | orchestrator | 2025-05-30 01:33:18 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:33:18.480084 | orchestrator | 2025-05-30 01:33:18 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:33:21.537621 | orchestrator | 2025-05-30 01:33:21 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:33:21.537716 | orchestrator | 2025-05-30 01:33:21 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:33:24.592118 | 
orchestrator | 2025-05-30 01:33:24 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:33:24.592233 | orchestrator | 2025-05-30 01:33:24 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:33:27.650668 | orchestrator | 2025-05-30 01:33:27 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:33:27.650854 | orchestrator | 2025-05-30 01:33:27 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:33:30.700456 | orchestrator | 2025-05-30 01:33:30 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:33:30.700562 | orchestrator | 2025-05-30 01:33:30 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:33:33.758278 | orchestrator | 2025-05-30 01:33:33 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:33:33.758380 | orchestrator | 2025-05-30 01:33:33 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:33:36.814125 | orchestrator | 2025-05-30 01:33:36 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:33:36.814230 | orchestrator | 2025-05-30 01:33:36 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:33:39.870723 | orchestrator | 2025-05-30 01:33:39 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:33:39.870903 | orchestrator | 2025-05-30 01:33:39 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:33:42.918889 | orchestrator | 2025-05-30 01:33:42 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:33:42.918999 | orchestrator | 2025-05-30 01:33:42 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:33:45.965034 | orchestrator | 2025-05-30 01:33:45 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:33:45.965143 | orchestrator | 2025-05-30 01:33:45 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:33:49.023363 | orchestrator | 2025-05-30 01:33:49 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:33:49.023469 | orchestrator | 2025-05-30 01:33:49 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:33:52.080337 | orchestrator | 2025-05-30 01:33:52 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:33:52.080446 | orchestrator | 2025-05-30 01:33:52 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:33:55.130580 | orchestrator | 2025-05-30 01:33:55 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:33:55.130689 | orchestrator | 2025-05-30 01:33:55 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:33:58.203628 | orchestrator | 2025-05-30 01:33:58 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:33:58.203729 | orchestrator | 2025-05-30 01:33:58 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:34:01.257430 | orchestrator | 2025-05-30 01:34:01 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:34:01.257575 | orchestrator | 2025-05-30 01:34:01 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:34:04.303581 | orchestrator | 2025-05-30 01:34:04 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:34:04.303687 | orchestrator | 2025-05-30 01:34:04 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:34:07.358260 | orchestrator | 2025-05-30 01:34:07 | INFO  | Task 
fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:34:07.358408 | orchestrator | 2025-05-30 01:34:07 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:34:10.407001 | orchestrator | 2025-05-30 01:34:10 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:34:10.407138 | orchestrator | 2025-05-30 01:34:10 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:34:13.463406 | orchestrator | 2025-05-30 01:34:13 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:34:13.463592 | orchestrator | 2025-05-30 01:34:13 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:34:16.530932 | orchestrator | 2025-05-30 01:34:16 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:34:16.531037 | orchestrator | 2025-05-30 01:34:16 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:34:19.578242 | orchestrator | 2025-05-30 01:34:19 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:34:19.578363 | orchestrator | 2025-05-30 01:34:19 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:34:22.634671 | orchestrator | 2025-05-30 01:34:22 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:34:22.634773 | orchestrator | 2025-05-30 01:34:22 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:34:25.692221 | orchestrator | 2025-05-30 01:34:25 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:34:25.692336 | orchestrator | 2025-05-30 01:34:25 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:34:28.753306 | orchestrator | 2025-05-30 01:34:28 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:34:28.756058 | orchestrator | 2025-05-30 01:34:28 | INFO  | Task 03dcb3d4-0418-4c1f-9f1a-8b415148fa52 is in state STARTED 2025-05-30 01:34:28.756102 | orchestrator | 2025-05-30 01:34:28 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:34:31.824138 | orchestrator | 2025-05-30 01:34:31 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:34:31.826655 | orchestrator | 2025-05-30 01:34:31 | INFO  | Task 03dcb3d4-0418-4c1f-9f1a-8b415148fa52 is in state STARTED 2025-05-30 01:34:31.826690 | orchestrator | 2025-05-30 01:34:31 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:34:34.890850 | orchestrator | 2025-05-30 01:34:34 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:34:34.891955 | orchestrator | 2025-05-30 01:34:34 | INFO  | Task 03dcb3d4-0418-4c1f-9f1a-8b415148fa52 is in state STARTED 2025-05-30 01:34:34.892209 | orchestrator | 2025-05-30 01:34:34 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:34:37.941562 | orchestrator | 2025-05-30 01:34:37 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:34:37.942307 | orchestrator | 2025-05-30 01:34:37 | INFO  | Task 03dcb3d4-0418-4c1f-9f1a-8b415148fa52 is in state SUCCESS 2025-05-30 01:34:37.942493 | orchestrator | 2025-05-30 01:34:37 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:34:40.997334 | orchestrator | 2025-05-30 01:34:40 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:34:40.997423 | orchestrator | 2025-05-30 01:34:40 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:34:44.053778 | orchestrator | 2025-05-30 01:34:44 | INFO  | Task 
fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:34:44.053962 | orchestrator | 2025-05-30 01:34:44 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:34:47.103748 | orchestrator | 2025-05-30 01:34:47 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:34:47.103851 | orchestrator | 2025-05-30 01:34:47 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:34:50.154753 | orchestrator | 2025-05-30 01:34:50 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:34:50.154869 | orchestrator | 2025-05-30 01:34:50 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:34:53.200636 | orchestrator | 2025-05-30 01:34:53 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:34:53.200733 | orchestrator | 2025-05-30 01:34:53 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:34:56.258573 | orchestrator | 2025-05-30 01:34:56 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:34:56.258646 | orchestrator | 2025-05-30 01:34:56 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:34:59.307273 | orchestrator | 2025-05-30 01:34:59 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:34:59.307400 | orchestrator | 2025-05-30 01:34:59 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:35:02.362954 | orchestrator | 2025-05-30 01:35:02 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:35:02.363067 | orchestrator | 2025-05-30 01:35:02 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:35:05.409053 | orchestrator | 2025-05-30 01:35:05 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:35:05.409155 | orchestrator | 2025-05-30 01:35:05 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:35:08.455640 | orchestrator | 2025-05-30 01:35:08 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:35:08.455755 | orchestrator | 2025-05-30 01:35:08 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:35:11.508634 | orchestrator | 2025-05-30 01:35:11 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:35:11.508743 | orchestrator | 2025-05-30 01:35:11 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:35:14.554476 | orchestrator | 2025-05-30 01:35:14 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:35:14.554591 | orchestrator | 2025-05-30 01:35:14 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:35:17.603996 | orchestrator | 2025-05-30 01:35:17 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:35:17.604082 | orchestrator | 2025-05-30 01:35:17 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:35:20.655469 | orchestrator | 2025-05-30 01:35:20 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:35:20.655582 | orchestrator | 2025-05-30 01:35:20 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:35:23.695600 | orchestrator | 2025-05-30 01:35:23 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:35:23.695675 | orchestrator | 2025-05-30 01:35:23 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:35:26.739559 | orchestrator | 2025-05-30 01:35:26 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 
01:35:26.739671 | orchestrator | 2025-05-30 01:35:26 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:35:29.789665 | orchestrator | 2025-05-30 01:35:29 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:35:29.789764 | orchestrator | 2025-05-30 01:35:29 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:35:32.845453 | orchestrator | 2025-05-30 01:35:32 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:35:32.845641 | orchestrator | 2025-05-30 01:35:32 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:35:35.898324 | orchestrator | 2025-05-30 01:35:35 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:35:35.898426 | orchestrator | 2025-05-30 01:35:35 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:35:38.948606 | orchestrator | 2025-05-30 01:35:38 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:35:38.948705 | orchestrator | 2025-05-30 01:35:38 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:35:42.002080 | orchestrator | 2025-05-30 01:35:41 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:35:42.002197 | orchestrator | 2025-05-30 01:35:41 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:35:45.046171 | orchestrator | 2025-05-30 01:35:45 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:35:45.046287 | orchestrator | 2025-05-30 01:35:45 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:35:48.096575 | orchestrator | 2025-05-30 01:35:48 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:35:48.096671 | orchestrator | 2025-05-30 01:35:48 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:35:51.140270 | orchestrator | 2025-05-30 01:35:51 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:35:51.140390 | orchestrator | 2025-05-30 01:35:51 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:35:54.190705 | orchestrator | 2025-05-30 01:35:54 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:35:54.190807 | orchestrator | 2025-05-30 01:35:54 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:35:57.243493 | orchestrator | 2025-05-30 01:35:57 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:35:57.243615 | orchestrator | 2025-05-30 01:35:57 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:36:00.292267 | orchestrator | 2025-05-30 01:36:00 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:36:00.292373 | orchestrator | 2025-05-30 01:36:00 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:36:03.334255 | orchestrator | 2025-05-30 01:36:03 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:36:03.334447 | orchestrator | 2025-05-30 01:36:03 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:36:06.383015 | orchestrator | 2025-05-30 01:36:06 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:36:06.383125 | orchestrator | 2025-05-30 01:36:06 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:36:09.428569 | orchestrator | 2025-05-30 01:36:09 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:36:09.428689 | orchestrator | 2025-05-30 01:36:09 | INFO  | Wait 1 second(s) 
until the next check 2025-05-30 01:36:12.480096 | orchestrator | 2025-05-30 01:36:12 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:36:12.480206 | orchestrator | 2025-05-30 01:36:12 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:36:15.541165 | orchestrator | 2025-05-30 01:36:15 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:36:15.541286 | orchestrator | 2025-05-30 01:36:15 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:36:18.594995 | orchestrator | 2025-05-30 01:36:18 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:36:18.595107 | orchestrator | 2025-05-30 01:36:18 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:36:21.644698 | orchestrator | 2025-05-30 01:36:21 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:36:21.645515 | orchestrator | 2025-05-30 01:36:21 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:36:24.693611 | orchestrator | 2025-05-30 01:36:24 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:36:24.693713 | orchestrator | 2025-05-30 01:36:24 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:36:27.740901 | orchestrator | 2025-05-30 01:36:27 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:36:27.741042 | orchestrator | 2025-05-30 01:36:27 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:36:30.789360 | orchestrator | 2025-05-30 01:36:30 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:36:30.789450 | orchestrator | 2025-05-30 01:36:30 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:36:33.839251 | orchestrator | 2025-05-30 01:36:33 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:36:33.839412 | orchestrator | 2025-05-30 01:36:33 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:36:36.893420 | orchestrator | 2025-05-30 01:36:36 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:36:36.893511 | orchestrator | 2025-05-30 01:36:36 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:36:39.940315 | orchestrator | 2025-05-30 01:36:39 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:36:39.940505 | orchestrator | 2025-05-30 01:36:39 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:36:42.996272 | orchestrator | 2025-05-30 01:36:42 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:36:42.996381 | orchestrator | 2025-05-30 01:36:42 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:36:46.052437 | orchestrator | 2025-05-30 01:36:46 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:36:46.052547 | orchestrator | 2025-05-30 01:36:46 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:36:49.104136 | orchestrator | 2025-05-30 01:36:49 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:36:49.104242 | orchestrator | 2025-05-30 01:36:49 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:36:52.151015 | orchestrator | 2025-05-30 01:36:52 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:36:52.151131 | orchestrator | 2025-05-30 01:36:52 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:36:55.196688 | orchestrator | 2025-05-30 
01:36:55 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:36:55.196799 | orchestrator | 2025-05-30 01:36:55 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:36:58.245336 | orchestrator | 2025-05-30 01:36:58 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:36:58.245453 | orchestrator | 2025-05-30 01:36:58 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:37:01.290373 | orchestrator | 2025-05-30 01:37:01 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:37:01.290517 | orchestrator | 2025-05-30 01:37:01 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:37:04.344256 | orchestrator | 2025-05-30 01:37:04 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:37:04.344376 | orchestrator | 2025-05-30 01:37:04 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:37:07.393916 | orchestrator | 2025-05-30 01:37:07 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:37:07.394165 | orchestrator | 2025-05-30 01:37:07 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:37:10.441144 | orchestrator | 2025-05-30 01:37:10 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:37:10.441256 | orchestrator | 2025-05-30 01:37:10 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:37:13.487122 | orchestrator | 2025-05-30 01:37:13 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:37:13.487212 | orchestrator | 2025-05-30 01:37:13 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:37:16.533347 | orchestrator | 2025-05-30 01:37:16 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:37:16.533455 | orchestrator | 2025-05-30 01:37:16 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:37:19.588938 | orchestrator | 2025-05-30 01:37:19 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:37:19.589071 | orchestrator | 2025-05-30 01:37:19 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:37:22.639103 | orchestrator | 2025-05-30 01:37:22 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:37:22.639199 | orchestrator | 2025-05-30 01:37:22 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:37:25.690819 | orchestrator | 2025-05-30 01:37:25 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:37:25.690935 | orchestrator | 2025-05-30 01:37:25 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:37:28.737280 | orchestrator | 2025-05-30 01:37:28 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:37:28.737392 | orchestrator | 2025-05-30 01:37:28 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:37:31.787023 | orchestrator | 2025-05-30 01:37:31 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:37:31.787113 | orchestrator | 2025-05-30 01:37:31 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:37:34.832560 | orchestrator | 2025-05-30 01:37:34 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:37:34.832653 | orchestrator | 2025-05-30 01:37:34 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:37:37.882274 | orchestrator | 2025-05-30 01:37:37 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 
2025-05-30 01:37:37.882392 | orchestrator | 2025-05-30 01:37:37 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:37:40.924718 | orchestrator | 2025-05-30 01:37:40 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:37:40.924909 | orchestrator | 2025-05-30 01:37:40 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:37:43.971196 | orchestrator | 2025-05-30 01:37:43 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:37:43.971303 | orchestrator | 2025-05-30 01:37:43 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:37:47.023422 | orchestrator | 2025-05-30 01:37:47 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:37:47.023536 | orchestrator | 2025-05-30 01:37:47 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:37:50.070763 | orchestrator | 2025-05-30 01:37:50 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:37:50.070866 | orchestrator | 2025-05-30 01:37:50 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:37:53.121921 | orchestrator | 2025-05-30 01:37:53 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:37:53.122103 | orchestrator | 2025-05-30 01:37:53 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:37:56.160485 | orchestrator | 2025-05-30 01:37:56 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:37:56.160591 | orchestrator | 2025-05-30 01:37:56 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:37:59.208364 | orchestrator | 2025-05-30 01:37:59 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:37:59.208476 | orchestrator | 2025-05-30 01:37:59 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:38:02.263748 | orchestrator | 2025-05-30 01:38:02 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:38:02.263848 | orchestrator | 2025-05-30 01:38:02 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:38:05.314089 | orchestrator | 2025-05-30 01:38:05 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:38:05.314207 | orchestrator | 2025-05-30 01:38:05 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:38:08.377461 | orchestrator | 2025-05-30 01:38:08 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:38:08.377523 | orchestrator | 2025-05-30 01:38:08 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:38:11.419120 | orchestrator | 2025-05-30 01:38:11 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:38:11.419217 | orchestrator | 2025-05-30 01:38:11 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:38:14.469795 | orchestrator | 2025-05-30 01:38:14 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:38:14.469894 | orchestrator | 2025-05-30 01:38:14 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:38:17.522349 | orchestrator | 2025-05-30 01:38:17 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:38:17.522538 | orchestrator | 2025-05-30 01:38:17 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:38:20.567494 | orchestrator | 2025-05-30 01:38:20 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:38:20.567597 | orchestrator | 2025-05-30 01:38:20 | INFO  | Wait 1 
second(s) until the next check 2025-05-30 01:38:23.608488 | orchestrator | 2025-05-30 01:38:23 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:38:23.608606 | orchestrator | 2025-05-30 01:38:23 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:38:26.655547 | orchestrator | 2025-05-30 01:38:26 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:38:26.655674 | orchestrator | 2025-05-30 01:38:26 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:38:29.712335 | orchestrator | 2025-05-30 01:38:29 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:38:29.712441 | orchestrator | 2025-05-30 01:38:29 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:38:32.761802 | orchestrator | 2025-05-30 01:38:32 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:38:32.761877 | orchestrator | 2025-05-30 01:38:32 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:38:35.812538 | orchestrator | 2025-05-30 01:38:35 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:38:35.812667 | orchestrator | 2025-05-30 01:38:35 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:38:38.859171 | orchestrator | 2025-05-30 01:38:38 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:38:38.859273 | orchestrator | 2025-05-30 01:38:38 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:38:41.910130 | orchestrator | 2025-05-30 01:38:41 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:38:41.910245 | orchestrator | 2025-05-30 01:38:41 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:38:44.958515 | orchestrator | 2025-05-30 01:38:44 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:38:44.958623 | orchestrator | 2025-05-30 01:38:44 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:38:48.015230 | orchestrator | 2025-05-30 01:38:48 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:38:48.015345 | orchestrator | 2025-05-30 01:38:48 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:38:51.063366 | orchestrator | 2025-05-30 01:38:51 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:38:51.063487 | orchestrator | 2025-05-30 01:38:51 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:38:54.115229 | orchestrator | 2025-05-30 01:38:54 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:38:54.115340 | orchestrator | 2025-05-30 01:38:54 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:38:57.164614 | orchestrator | 2025-05-30 01:38:57 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:38:57.164726 | orchestrator | 2025-05-30 01:38:57 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:39:00.216172 | orchestrator | 2025-05-30 01:39:00 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:39:00.216295 | orchestrator | 2025-05-30 01:39:00 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:39:03.266099 | orchestrator | 2025-05-30 01:39:03 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:39:03.266222 | orchestrator | 2025-05-30 01:39:03 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:39:06.320078 | orchestrator | 
2025-05-30 01:39:06 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:39:06.320187 | orchestrator | 2025-05-30 01:39:06 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:39:09.372553 | orchestrator | 2025-05-30 01:39:09 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:39:09.372658 | orchestrator | 2025-05-30 01:39:09 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:39:12.418852 | orchestrator | 2025-05-30 01:39:12 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:39:12.419003 | orchestrator | 2025-05-30 01:39:12 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:39:15.470600 | orchestrator | 2025-05-30 01:39:15 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:39:15.470702 | orchestrator | 2025-05-30 01:39:15 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:39:18.522700 | orchestrator | 2025-05-30 01:39:18 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:39:18.522832 | orchestrator | 2025-05-30 01:39:18 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:39:21.586683 | orchestrator | 2025-05-30 01:39:21 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:39:21.586784 | orchestrator | 2025-05-30 01:39:21 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:39:24.643916 | orchestrator | 2025-05-30 01:39:24 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:39:24.644069 | orchestrator | 2025-05-30 01:39:24 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:39:27.691323 | orchestrator | 2025-05-30 01:39:27 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:39:27.691430 | orchestrator | 2025-05-30 01:39:27 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:39:30.752352 | orchestrator | 2025-05-30 01:39:30 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:39:30.752449 | orchestrator | 2025-05-30 01:39:30 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:39:33.801065 | orchestrator | 2025-05-30 01:39:33 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:39:33.801152 | orchestrator | 2025-05-30 01:39:33 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:39:36.852683 | orchestrator | 2025-05-30 01:39:36 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:39:36.852795 | orchestrator | 2025-05-30 01:39:36 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:39:39.903366 | orchestrator | 2025-05-30 01:39:39 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:39:39.903495 | orchestrator | 2025-05-30 01:39:39 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:39:42.953155 | orchestrator | 2025-05-30 01:39:42 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:39:42.953253 | orchestrator | 2025-05-30 01:39:42 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:39:46.004017 | orchestrator | 2025-05-30 01:39:46 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:39:46.004134 | orchestrator | 2025-05-30 01:39:46 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:39:49.053219 | orchestrator | 2025-05-30 01:39:49 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in 
state STARTED 2025-05-30 01:39:49.053331 | orchestrator | 2025-05-30 01:39:49 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:39:52.108478 | orchestrator | 2025-05-30 01:39:52 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:39:52.108581 | orchestrator | 2025-05-30 01:39:52 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:39:55.150419 | orchestrator | 2025-05-30 01:39:55 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:39:55.150513 | orchestrator | 2025-05-30 01:39:55 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:39:58.204205 | orchestrator | 2025-05-30 01:39:58 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:39:58.204310 | orchestrator | 2025-05-30 01:39:58 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:40:01.250430 | orchestrator | 2025-05-30 01:40:01 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:40:01.250574 | orchestrator | 2025-05-30 01:40:01 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:40:04.302711 | orchestrator | 2025-05-30 01:40:04 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:40:04.302818 | orchestrator | 2025-05-30 01:40:04 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:40:07.357422 | orchestrator | 2025-05-30 01:40:07 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:40:07.357540 | orchestrator | 2025-05-30 01:40:07 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:40:10.409196 | orchestrator | 2025-05-30 01:40:10 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:40:10.409295 | orchestrator | 2025-05-30 01:40:10 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:40:13.451611 | orchestrator | 2025-05-30 01:40:13 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:40:13.451721 | orchestrator | 2025-05-30 01:40:13 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:40:16.500793 | orchestrator | 2025-05-30 01:40:16 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:40:16.500898 | orchestrator | 2025-05-30 01:40:16 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:40:19.556124 | orchestrator | 2025-05-30 01:40:19 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:40:19.556270 | orchestrator | 2025-05-30 01:40:19 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:40:22.609504 | orchestrator | 2025-05-30 01:40:22 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:40:22.609608 | orchestrator | 2025-05-30 01:40:22 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:40:25.673388 | orchestrator | 2025-05-30 01:40:25 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:40:25.673520 | orchestrator | 2025-05-30 01:40:25 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:40:28.762539 | orchestrator | 2025-05-30 01:40:28 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:40:28.762628 | orchestrator | 2025-05-30 01:40:28 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:40:31.810277 | orchestrator | 2025-05-30 01:40:31 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:40:31.810400 | orchestrator | 2025-05-30 01:40:31 | 
INFO  | Wait 1 second(s) until the next check
2025-05-30 01:40:34.854572 | orchestrator | 2025-05-30 01:40:34 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED
[The two entries above repeat unchanged, apart from their timestamps, roughly every 3 seconds from 01:40:34 to 01:55:31 while task fb4c5da4-6736-4528-a700-d20c81fc8612 remains in state STARTED; the several hundred identical polling entries are condensed here. The only other events in this interval are the two tasks below starting and reaching SUCCESS.]
2025-05-30 01:44:26.692207 | orchestrator | 2025-05-30 01:44:26 | INFO  | Task c8147555-6464-42f0-8ba0-073a3265da11 is in state STARTED
2025-05-30 01:44:38.920908 | orchestrator | 2025-05-30 01:44:38 | INFO  | Task c8147555-6464-42f0-8ba0-073a3265da11 is in state SUCCESS
2025-05-30 01:54:27.401368 | orchestrator | 2025-05-30 01:54:27 | INFO  | Task b7f7c5d9-21ed-4ccd-8689-c186fe43273a is in state STARTED
2025-05-30 01:54:36.571492 | orchestrator | 2025-05-30 01:54:36 | INFO  | Task b7f7c5d9-21ed-4ccd-8689-c186fe43273a is in state SUCCESS
2025-05-30 01:55:31.454429 | orchestrator | 2025-05-30 01:55:31 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED
2025-05-30 01:55:31.454533 | orchestrator | 2025-05-30 01:55:31 | INFO  | Wait 1 second(s) until the next check
01:55:34.502715 | orchestrator | 2025-05-30 01:55:34 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:55:34.502825 | orchestrator | 2025-05-30 01:55:34 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:55:37.551052 | orchestrator | 2025-05-30 01:55:37 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:55:37.551251 | orchestrator | 2025-05-30 01:55:37 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:55:40.600730 | orchestrator | 2025-05-30 01:55:40 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:55:40.600862 | orchestrator | 2025-05-30 01:55:40 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:55:43.647496 | orchestrator | 2025-05-30 01:55:43 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:55:43.647582 | orchestrator | 2025-05-30 01:55:43 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:55:46.689836 | orchestrator | 2025-05-30 01:55:46 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:55:46.689934 | orchestrator | 2025-05-30 01:55:46 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:55:49.740570 | orchestrator | 2025-05-30 01:55:49 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:55:49.740657 | orchestrator | 2025-05-30 01:55:49 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:55:52.791785 | orchestrator | 2025-05-30 01:55:52 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:55:52.791900 | orchestrator | 2025-05-30 01:55:52 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:55:55.843473 | orchestrator | 2025-05-30 01:55:55 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:55:55.843547 | orchestrator | 2025-05-30 01:55:55 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:55:58.891681 | orchestrator | 2025-05-30 01:55:58 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:55:58.891797 | orchestrator | 2025-05-30 01:55:58 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:56:01.937454 | orchestrator | 2025-05-30 01:56:01 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:56:01.937563 | orchestrator | 2025-05-30 01:56:01 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:56:04.981335 | orchestrator | 2025-05-30 01:56:04 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:56:04.981448 | orchestrator | 2025-05-30 01:56:04 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:56:08.031465 | orchestrator | 2025-05-30 01:56:08 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:56:08.031571 | orchestrator | 2025-05-30 01:56:08 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:56:11.080576 | orchestrator | 2025-05-30 01:56:11 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:56:11.080683 | orchestrator | 2025-05-30 01:56:11 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:56:14.127191 | orchestrator | 2025-05-30 01:56:14 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:56:14.127279 | orchestrator | 2025-05-30 01:56:14 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:56:17.168980 | orchestrator | 2025-05-30 01:56:17 | INFO  | Task 
fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:56:17.169053 | orchestrator | 2025-05-30 01:56:17 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:56:20.218873 | orchestrator | 2025-05-30 01:56:20 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:56:20.218964 | orchestrator | 2025-05-30 01:56:20 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:56:23.268746 | orchestrator | 2025-05-30 01:56:23 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:56:23.268846 | orchestrator | 2025-05-30 01:56:23 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:56:26.322769 | orchestrator | 2025-05-30 01:56:26 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:56:26.322889 | orchestrator | 2025-05-30 01:56:26 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:56:29.377249 | orchestrator | 2025-05-30 01:56:29 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:56:29.377364 | orchestrator | 2025-05-30 01:56:29 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:56:32.424301 | orchestrator | 2025-05-30 01:56:32 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:56:32.424411 | orchestrator | 2025-05-30 01:56:32 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:56:35.469912 | orchestrator | 2025-05-30 01:56:35 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:56:35.470079 | orchestrator | 2025-05-30 01:56:35 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:56:38.516556 | orchestrator | 2025-05-30 01:56:38 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:56:38.516659 | orchestrator | 2025-05-30 01:56:38 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:56:41.560743 | orchestrator | 2025-05-30 01:56:41 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:56:41.560842 | orchestrator | 2025-05-30 01:56:41 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:56:44.608826 | orchestrator | 2025-05-30 01:56:44 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:56:44.608911 | orchestrator | 2025-05-30 01:56:44 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:56:47.663320 | orchestrator | 2025-05-30 01:56:47 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:56:47.663421 | orchestrator | 2025-05-30 01:56:47 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:56:50.711544 | orchestrator | 2025-05-30 01:56:50 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:56:50.711648 | orchestrator | 2025-05-30 01:56:50 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:56:53.761994 | orchestrator | 2025-05-30 01:56:53 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:56:53.762184 | orchestrator | 2025-05-30 01:56:53 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:56:56.808736 | orchestrator | 2025-05-30 01:56:56 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:56:56.808840 | orchestrator | 2025-05-30 01:56:56 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:56:59.856586 | orchestrator | 2025-05-30 01:56:59 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 
01:56:59.856730 | orchestrator | 2025-05-30 01:56:59 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:57:02.904837 | orchestrator | 2025-05-30 01:57:02 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:57:02.904941 | orchestrator | 2025-05-30 01:57:02 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:57:05.953478 | orchestrator | 2025-05-30 01:57:05 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:57:05.953601 | orchestrator | 2025-05-30 01:57:05 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:57:09.003846 | orchestrator | 2025-05-30 01:57:09 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:57:09.003949 | orchestrator | 2025-05-30 01:57:09 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:57:12.050609 | orchestrator | 2025-05-30 01:57:12 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:57:12.051137 | orchestrator | 2025-05-30 01:57:12 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:57:15.096528 | orchestrator | 2025-05-30 01:57:15 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:57:15.096649 | orchestrator | 2025-05-30 01:57:15 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:57:18.144894 | orchestrator | 2025-05-30 01:57:18 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:57:18.145008 | orchestrator | 2025-05-30 01:57:18 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:57:21.191379 | orchestrator | 2025-05-30 01:57:21 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:57:21.191486 | orchestrator | 2025-05-30 01:57:21 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:57:24.237223 | orchestrator | 2025-05-30 01:57:24 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:57:24.237325 | orchestrator | 2025-05-30 01:57:24 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:57:27.291965 | orchestrator | 2025-05-30 01:57:27 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:57:27.292066 | orchestrator | 2025-05-30 01:57:27 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:57:30.340332 | orchestrator | 2025-05-30 01:57:30 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:57:30.340447 | orchestrator | 2025-05-30 01:57:30 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:57:33.389567 | orchestrator | 2025-05-30 01:57:33 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:57:33.389672 | orchestrator | 2025-05-30 01:57:33 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:57:36.433351 | orchestrator | 2025-05-30 01:57:36 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:57:36.433468 | orchestrator | 2025-05-30 01:57:36 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:57:39.482832 | orchestrator | 2025-05-30 01:57:39 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:57:39.482937 | orchestrator | 2025-05-30 01:57:39 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:57:42.518367 | orchestrator | 2025-05-30 01:57:42 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:57:42.518481 | orchestrator | 2025-05-30 01:57:42 | INFO  | Wait 1 second(s) 
until the next check 2025-05-30 01:57:45.570750 | orchestrator | 2025-05-30 01:57:45 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:57:45.570890 | orchestrator | 2025-05-30 01:57:45 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:57:48.621411 | orchestrator | 2025-05-30 01:57:48 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:57:48.621556 | orchestrator | 2025-05-30 01:57:48 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:57:51.673942 | orchestrator | 2025-05-30 01:57:51 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:57:51.674191 | orchestrator | 2025-05-30 01:57:51 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:57:54.719799 | orchestrator | 2025-05-30 01:57:54 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:57:54.719888 | orchestrator | 2025-05-30 01:57:54 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:57:57.770683 | orchestrator | 2025-05-30 01:57:57 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:57:57.770773 | orchestrator | 2025-05-30 01:57:57 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:58:00.821184 | orchestrator | 2025-05-30 01:58:00 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:58:00.821296 | orchestrator | 2025-05-30 01:58:00 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:58:03.870240 | orchestrator | 2025-05-30 01:58:03 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:58:03.870342 | orchestrator | 2025-05-30 01:58:03 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:58:06.915523 | orchestrator | 2025-05-30 01:58:06 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:58:06.915639 | orchestrator | 2025-05-30 01:58:06 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:58:09.958643 | orchestrator | 2025-05-30 01:58:09 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:58:09.958745 | orchestrator | 2025-05-30 01:58:09 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:58:13.004179 | orchestrator | 2025-05-30 01:58:13 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:58:13.004286 | orchestrator | 2025-05-30 01:58:13 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:58:16.055399 | orchestrator | 2025-05-30 01:58:16 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:58:16.055505 | orchestrator | 2025-05-30 01:58:16 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:58:19.099272 | orchestrator | 2025-05-30 01:58:19 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:58:19.099386 | orchestrator | 2025-05-30 01:58:19 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:58:22.144789 | orchestrator | 2025-05-30 01:58:22 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:58:22.144892 | orchestrator | 2025-05-30 01:58:22 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:58:25.188325 | orchestrator | 2025-05-30 01:58:25 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:58:25.188439 | orchestrator | 2025-05-30 01:58:25 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:58:28.246277 | orchestrator | 2025-05-30 
01:58:28 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:58:28.246398 | orchestrator | 2025-05-30 01:58:28 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:58:31.303080 | orchestrator | 2025-05-30 01:58:31 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:58:31.303229 | orchestrator | 2025-05-30 01:58:31 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:58:34.351199 | orchestrator | 2025-05-30 01:58:34 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:58:34.351333 | orchestrator | 2025-05-30 01:58:34 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:58:37.404078 | orchestrator | 2025-05-30 01:58:37 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:58:37.404320 | orchestrator | 2025-05-30 01:58:37 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:58:40.448843 | orchestrator | 2025-05-30 01:58:40 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:58:40.448960 | orchestrator | 2025-05-30 01:58:40 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:58:43.495391 | orchestrator | 2025-05-30 01:58:43 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:58:43.495506 | orchestrator | 2025-05-30 01:58:43 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:58:46.547573 | orchestrator | 2025-05-30 01:58:46 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:58:46.547701 | orchestrator | 2025-05-30 01:58:46 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:58:49.592600 | orchestrator | 2025-05-30 01:58:49 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:58:49.592729 | orchestrator | 2025-05-30 01:58:49 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:58:52.637078 | orchestrator | 2025-05-30 01:58:52 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:58:52.637221 | orchestrator | 2025-05-30 01:58:52 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:58:55.682598 | orchestrator | 2025-05-30 01:58:55 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:58:55.682695 | orchestrator | 2025-05-30 01:58:55 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:58:58.732717 | orchestrator | 2025-05-30 01:58:58 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:58:58.732824 | orchestrator | 2025-05-30 01:58:58 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:59:01.777588 | orchestrator | 2025-05-30 01:59:01 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:59:01.777679 | orchestrator | 2025-05-30 01:59:01 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:59:04.819158 | orchestrator | 2025-05-30 01:59:04 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:59:04.819252 | orchestrator | 2025-05-30 01:59:04 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:59:07.871700 | orchestrator | 2025-05-30 01:59:07 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:59:07.871811 | orchestrator | 2025-05-30 01:59:07 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:59:10.916648 | orchestrator | 2025-05-30 01:59:10 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 
2025-05-30 01:59:10.916770 | orchestrator | 2025-05-30 01:59:10 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:59:13.962330 | orchestrator | 2025-05-30 01:59:13 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:59:13.962444 | orchestrator | 2025-05-30 01:59:13 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:59:17.013886 | orchestrator | 2025-05-30 01:59:17 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:59:17.014133 | orchestrator | 2025-05-30 01:59:17 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:59:20.065497 | orchestrator | 2025-05-30 01:59:20 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:59:20.065606 | orchestrator | 2025-05-30 01:59:20 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:59:23.109550 | orchestrator | 2025-05-30 01:59:23 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:59:23.109655 | orchestrator | 2025-05-30 01:59:23 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:59:26.156805 | orchestrator | 2025-05-30 01:59:26 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:59:26.156918 | orchestrator | 2025-05-30 01:59:26 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:59:29.202517 | orchestrator | 2025-05-30 01:59:29 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:59:29.202624 | orchestrator | 2025-05-30 01:59:29 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:59:32.252004 | orchestrator | 2025-05-30 01:59:32 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:59:32.252171 | orchestrator | 2025-05-30 01:59:32 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:59:35.306122 | orchestrator | 2025-05-30 01:59:35 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:59:35.306201 | orchestrator | 2025-05-30 01:59:35 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:59:38.344341 | orchestrator | 2025-05-30 01:59:38 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:59:38.344424 | orchestrator | 2025-05-30 01:59:38 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:59:41.389997 | orchestrator | 2025-05-30 01:59:41 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:59:41.390230 | orchestrator | 2025-05-30 01:59:41 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:59:44.433038 | orchestrator | 2025-05-30 01:59:44 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:59:44.433164 | orchestrator | 2025-05-30 01:59:44 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:59:47.487183 | orchestrator | 2025-05-30 01:59:47 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:59:47.488040 | orchestrator | 2025-05-30 01:59:47 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:59:50.544020 | orchestrator | 2025-05-30 01:59:50 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:59:50.544176 | orchestrator | 2025-05-30 01:59:50 | INFO  | Wait 1 second(s) until the next check 2025-05-30 01:59:53.596149 | orchestrator | 2025-05-30 01:59:53 | INFO  | Task fb4c5da4-6736-4528-a700-d20c81fc8612 is in state STARTED 2025-05-30 01:59:53.596255 | orchestrator | 2025-05-30 01:59:53 | INFO  | Wait 1 
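The STARTED/wait pairs above come from a client polling an asynchronous task until it reaches a terminal state. A minimal sketch of that pattern in Python is shown below; the helper name wait_for_task, the get_task_state callable, and the set of terminal states are illustrative assumptions, not the actual OSISM client API.

import time

# Minimal poll-until-terminal-state sketch (illustrative only).
# get_task_state is an assumed callable that returns the task's current state string.
TERMINAL_STATES = {"SUCCESS", "FAILURE"}  # assumed terminal states

def wait_for_task(task_id, get_task_state, interval=1.0, timeout=None):
    """Poll a task until it reaches a terminal state or the timeout expires."""
    start = time.monotonic()
    while True:
        state = get_task_state(task_id)
        print(f"Task {task_id} is in state {state}")
        if state in TERMINAL_STATES:
            return state
        if timeout is not None and time.monotonic() - start > timeout:
            raise TimeoutError(f"task {task_id} still {state} after {timeout}s")
        print(f"Wait {interval:.0f} second(s) until the next check")
        time.sleep(interval)

Note that in this run the poll loop itself never gives up; it is the Zuul job timeout that ends the build with RESULT_TIMED_OUT below while task fb4c5da4-6736-4528-a700-d20c81fc8612 is still STARTED.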
2025-05-30 02:00:20.496215 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-05-30 02:00:20.498225 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-05-30 02:00:21.278063 |
2025-05-30 02:00:21.278249 | PLAY [Post output play]
2025-05-30 02:00:21.294217 |
2025-05-30 02:00:21.294365 | LOOP [stage-output : Register sources]
2025-05-30 02:00:21.364952 |
2025-05-30 02:00:21.365440 | TASK [stage-output : Check sudo]
2025-05-30 02:00:22.165049 | orchestrator | sudo: a password is required
2025-05-30 02:00:22.409297 | orchestrator | ok: Runtime: 0:00:00.014750
2025-05-30 02:00:22.416427 |
2025-05-30 02:00:22.416543 | LOOP [stage-output : Set source and destination for files and folders]
2025-05-30 02:00:22.447705 |
2025-05-30 02:00:22.447920 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-05-30 02:00:22.525614 | orchestrator | ok
2025-05-30 02:00:22.534334 |
2025-05-30 02:00:22.534474 | LOOP [stage-output : Ensure target folders exist]
2025-05-30 02:00:22.944328 | orchestrator | ok: "docs"
2025-05-30 02:00:22.944668 |
2025-05-30 02:00:23.122878 | orchestrator | ok: "artifacts"
2025-05-30 02:00:23.326749 | orchestrator | ok: "logs"
2025-05-30 02:00:23.346284 |
2025-05-30 02:00:23.346478 | LOOP [stage-output : Copy files and folders to staging folder]
2025-05-30 02:00:23.384886 |
2025-05-30 02:00:23.385254 | TASK [stage-output : Make all log files readable]
2025-05-30 02:00:23.650694 | orchestrator | ok
2025-05-30 02:00:23.659322 |
2025-05-30 02:00:23.659475 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-05-30 02:00:23.694669 | orchestrator | skipping: Conditional result was False
2025-05-30 02:00:23.707634 |
2025-05-30 02:00:23.707944 | TASK [stage-output : Discover log files for compression]
2025-05-30 02:00:23.734973 | orchestrator | skipping: Conditional result was False
2025-05-30 02:00:23.759710 |
2025-05-30 02:00:23.760198 | LOOP [stage-output : Archive everything from logs]
2025-05-30 02:00:23.812496 |
2025-05-30 02:00:23.812733 | PLAY [Post cleanup play]
2025-05-30 02:00:23.823496 |
2025-05-30 02:00:23.823625 | TASK [Set cloud fact (Zuul deployment)]
2025-05-30 02:00:23.881825 | orchestrator | ok
2025-05-30 02:00:23.894862 |
2025-05-30 02:00:23.895025 | TASK [Set cloud fact (local deployment)]
2025-05-30 02:00:23.930600 | orchestrator | skipping: Conditional result was False
2025-05-30 02:00:23.948032 |
2025-05-30 02:00:23.948302 | TASK [Clean the cloud environment]
2025-05-30 02:00:24.529564 | orchestrator | 2025-05-30 02:00:24 - clean up servers
2025-05-30 02:00:25.401272 | orchestrator | 2025-05-30 02:00:25 - testbed-manager
2025-05-30 02:00:25.506381 | orchestrator | 2025-05-30 02:00:25 - testbed-node-3
2025-05-30 02:00:25.600748 | orchestrator | 2025-05-30 02:00:25 - testbed-node-4
2025-05-30 02:00:25.690538 | orchestrator | 2025-05-30 02:00:25 - testbed-node-1
2025-05-30 02:00:25.775901 | orchestrator | 2025-05-30 02:00:25 - testbed-node-0
2025-05-30 02:00:25.868737 | orchestrator | 2025-05-30 02:00:25 - testbed-node-5
2025-05-30 02:00:25.957865 | orchestrator | 2025-05-30 02:00:25 - testbed-node-2
2025-05-30 02:00:26.043853 | orchestrator | 2025-05-30 02:00:26 - clean up keypairs
2025-05-30 02:00:26.062070 | orchestrator | 2025-05-30 02:00:26 - testbed
2025-05-30 02:00:26.085925 | orchestrator | 2025-05-30 02:00:26 - wait for servers to be gone
2025-05-30 02:00:34.852172 | orchestrator | 2025-05-30 02:00:34 - clean up ports
2025-05-30 02:00:35.046416 | orchestrator | 2025-05-30 02:00:35 - 7c20c5a5-779c-4313-8249-3ee0be5ed13d
2025-05-30 02:00:35.341584 | orchestrator | 2025-05-30 02:00:35 - 847b04d7-0cfd-4f66-b3fd-05673c8e42cf
2025-05-30 02:00:35.644338 | orchestrator | 2025-05-30 02:00:35 - 9e0a8cd1-92c2-44d1-8518-06325ccd0cdb
2025-05-30 02:00:35.919808 | orchestrator | 2025-05-30 02:00:35 - a153f5b9-f51b-4c62-994f-4499ff56957f
2025-05-30 02:00:36.167773 | orchestrator | 2025-05-30 02:00:36 - ac1692ab-98fa-479d-b591-d5da7116a4eb
2025-05-30 02:00:36.600377 | orchestrator | 2025-05-30 02:00:36 - ee7d5088-2418-43cd-9176-6e6a666d8f96
2025-05-30 02:00:36.850686 | orchestrator | 2025-05-30 02:00:36 - f5387598-7596-4a7d-a4e0-620fd1f63c9b
2025-05-30 02:00:37.080723 | orchestrator | 2025-05-30 02:00:37 - clean up volumes
2025-05-30 02:00:37.196535 | orchestrator | 2025-05-30 02:00:37 - testbed-volume-0-node-base
2025-05-30 02:00:37.233673 | orchestrator | 2025-05-30 02:00:37 - testbed-volume-5-node-base
2025-05-30 02:00:37.274973 | orchestrator | 2025-05-30 02:00:37 - testbed-volume-3-node-base
2025-05-30 02:00:37.317133 | orchestrator | 2025-05-30 02:00:37 - testbed-volume-1-node-base
2025-05-30 02:00:37.356998 | orchestrator | 2025-05-30 02:00:37 - testbed-volume-4-node-base
2025-05-30 02:00:37.401974 | orchestrator | 2025-05-30 02:00:37 - testbed-volume-2-node-base
2025-05-30 02:00:37.446756 | orchestrator | 2025-05-30 02:00:37 - testbed-volume-manager-base
2025-05-30 02:00:37.489384 | orchestrator | 2025-05-30 02:00:37 - testbed-volume-1-node-4
2025-05-30 02:00:37.531407 | orchestrator | 2025-05-30 02:00:37 - testbed-volume-7-node-4
2025-05-30 02:00:37.573709 | orchestrator | 2025-05-30 02:00:37 - testbed-volume-6-node-3
2025-05-30 02:00:37.611807 | orchestrator | 2025-05-30 02:00:37 - testbed-volume-2-node-5
2025-05-30 02:00:37.655895 | orchestrator | 2025-05-30 02:00:37 - testbed-volume-5-node-5
2025-05-30 02:00:37.699047 | orchestrator | 2025-05-30 02:00:37 - testbed-volume-0-node-3
2025-05-30 02:00:37.768104 | orchestrator | 2025-05-30 02:00:37 - testbed-volume-3-node-3
2025-05-30 02:00:37.812506 | orchestrator | 2025-05-30 02:00:37 - testbed-volume-8-node-5
2025-05-30 02:00:37.853662 | orchestrator | 2025-05-30 02:00:37 - testbed-volume-4-node-4
2025-05-30 02:00:37.896828 | orchestrator | 2025-05-30 02:00:37 - disconnect routers
2025-05-30 02:00:38.051184 | orchestrator | 2025-05-30 02:00:38 - testbed
2025-05-30 02:00:39.489648 | orchestrator | 2025-05-30 02:00:39 - clean up subnets
2025-05-30 02:00:39.560926 | orchestrator | 2025-05-30 02:00:39 - subnet-testbed-management
2025-05-30 02:00:39.729854 | orchestrator | 2025-05-30 02:00:39 - clean up networks
2025-05-30 02:00:39.902728 | orchestrator | 2025-05-30 02:00:39 - net-testbed-management
2025-05-30 02:00:40.199541 | orchestrator | 2025-05-30 02:00:40 - clean up security groups
2025-05-30 02:00:40.241844 | orchestrator | 2025-05-30 02:00:40 - testbed-management
2025-05-30 02:00:40.369170 | orchestrator | 2025-05-30 02:00:40 - testbed-node
2025-05-30 02:00:40.481577 | orchestrator | 2025-05-30 02:00:40 - clean up floating ips
2025-05-30 02:00:40.515116 | orchestrator | 2025-05-30 02:00:40 - 81.163.193.162
2025-05-30 02:00:40.924338 | orchestrator | 2025-05-30 02:00:40 - clean up routers
2025-05-30 02:00:41.491809 | orchestrator | 2025-05-30 02:00:41 - testbed
2025-05-30 02:00:42.516708 | orchestrator | ok: Runtime: 0:00:18.053783
2025-05-30 02:00:42.521055 |
2025-05-30 02:00:42.521311 | PLAY RECAP
2025-05-30 02:00:42.521441 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-05-30 02:00:42.521506 |
2025-05-30 02:00:42.658642 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-05-30 02:00:42.661019 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-05-30 02:00:43.407767 |
2025-05-30 02:00:43.407934 | PLAY [Cleanup play]
2025-05-30 02:00:43.424260 |
2025-05-30 02:00:43.424398 | TASK [Set cloud fact (Zuul deployment)]
2025-05-30 02:00:43.477823 | orchestrator | ok
2025-05-30 02:00:43.485980 |
2025-05-30 02:00:43.486146 | TASK [Set cloud fact (local deployment)]
2025-05-30 02:00:43.520357 | orchestrator | skipping: Conditional result was False
2025-05-30 02:00:43.538388 |
2025-05-30 02:00:43.538558 | TASK [Clean the cloud environment]
2025-05-30 02:00:44.719959 | orchestrator | 2025-05-30 02:00:44 - clean up servers
2025-05-30 02:00:45.271576 | orchestrator | 2025-05-30 02:00:45 - clean up keypairs
2025-05-30 02:00:45.289677 | orchestrator | 2025-05-30 02:00:45 - wait for servers to be gone
2025-05-30 02:00:45.331610 | orchestrator | 2025-05-30 02:00:45 - clean up ports
2025-05-30 02:00:45.418657 | orchestrator | 2025-05-30 02:00:45 - clean up volumes
2025-05-30 02:00:45.481486 | orchestrator | 2025-05-30 02:00:45 - disconnect routers
2025-05-30 02:00:45.510146 | orchestrator | 2025-05-30 02:00:45 - clean up subnets
2025-05-30 02:00:45.531707 | orchestrator | 2025-05-30 02:00:45 - clean up networks
2025-05-30 02:00:45.698911 | orchestrator | 2025-05-30 02:00:45 - clean up security groups
2025-05-30 02:00:45.731614 | orchestrator | 2025-05-30 02:00:45 - clean up floating ips
2025-05-30 02:00:45.757190 | orchestrator | 2025-05-30 02:00:45 - clean up routers
2025-05-30 02:00:46.078723 | orchestrator | ok: Runtime: 0:00:01.430331
2025-05-30 02:00:46.082048 |
2025-05-30 02:00:46.082225 | PLAY RECAP
2025-05-30 02:00:46.082337 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-05-30 02:00:46.082392 |
2025-05-30 02:00:46.220430 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-05-30 02:00:46.221490 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-30 02:00:46.959012 |
2025-05-30 02:00:46.959213 | PLAY [Base post-fetch]
2025-05-30 02:00:46.977745 |
2025-05-30 02:00:46.978098 | TASK [fetch-output : Set log path for multiple nodes]
2025-05-30 02:00:47.044485 | orchestrator | skipping: Conditional result was False
2025-05-30 02:00:47.059591 |
2025-05-30 02:00:47.059806 | TASK [fetch-output : Set log path for single node]
2025-05-30 02:00:47.110987 | orchestrator | ok
2025-05-30 02:00:47.120222 |
2025-05-30 02:00:47.120378 | LOOP [fetch-output : Ensure local output dirs]
2025-05-30 02:00:47.632855 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/94fe11b4cd544891847b158adf92cff0/work/logs"
2025-05-30 02:00:47.915567 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/94fe11b4cd544891847b158adf92cff0/work/artifacts"
2025-05-30 02:00:48.198279 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/94fe11b4cd544891847b158adf92cff0/work/docs"
2025-05-30 02:00:48.222290 |
2025-05-30 02:00:48.222498 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-05-30 02:00:49.219904 | orchestrator | changed: .d..t...... ./
2025-05-30 02:00:49.220390 | orchestrator | changed: All items complete
2025-05-30 02:00:49.220482 |
2025-05-30 02:00:49.965991 | orchestrator | changed: .d..t...... ./
2025-05-30 02:00:50.715030 | orchestrator | changed: .d..t...... ./
2025-05-30 02:00:50.747085 |
2025-05-30 02:00:50.747335 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-05-30 02:00:50.788665 | orchestrator | skipping: Conditional result was False
2025-05-30 02:00:50.792635 | orchestrator | skipping: Conditional result was False
2025-05-30 02:00:50.810074 |
2025-05-30 02:00:50.810230 | PLAY RECAP
2025-05-30 02:00:50.810316 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-05-30 02:00:50.810359 |
2025-05-30 02:00:50.935834 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-30 02:00:50.939459 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-30 02:00:51.696458 |
2025-05-30 02:00:51.696639 | PLAY [Base post]
2025-05-30 02:00:51.711419 |
2025-05-30 02:00:51.711566 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-05-30 02:00:52.701332 | orchestrator | changed
2025-05-30 02:00:52.714347 |
2025-05-30 02:00:52.714504 | PLAY RECAP
2025-05-30 02:00:52.714599 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-05-30 02:00:52.714701 |
2025-05-30 02:00:52.839318 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-30 02:00:52.840331 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-05-30 02:00:53.630557 |
2025-05-30 02:00:53.630736 | PLAY [Base post-logs]
2025-05-30 02:00:53.641776 |
2025-05-30 02:00:53.641922 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-05-30 02:00:54.168518 | localhost | changed
2025-05-30 02:00:54.184097 |
2025-05-30 02:00:54.184355 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-05-30 02:00:54.213490 | localhost | ok
2025-05-30 02:00:54.220213 |
2025-05-30 02:00:54.220396 | TASK [Set zuul-log-path fact]
2025-05-30 02:00:54.239330 | localhost | ok
2025-05-30 02:00:54.254605 |
2025-05-30 02:00:54.254749 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-30 02:00:54.293395 | localhost | ok
2025-05-30 02:00:54.300836 |
2025-05-30 02:00:54.301031 | TASK [upload-logs : Create log directories]
2025-05-30 02:00:54.849930 | localhost | changed
2025-05-30 02:00:54.854508 |
2025-05-30 02:00:54.854653 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-05-30 02:00:55.358763 | localhost -> localhost | ok: Runtime: 0:00:00.007058
2025-05-30 02:00:55.367881 |
2025-05-30 02:00:55.368080 | TASK [upload-logs : Upload logs to log server]
2025-05-30 02:00:55.965561 | localhost | Output suppressed because no_log was given
2025-05-30 02:00:55.969335 |
2025-05-30 02:00:55.969521 | LOOP [upload-logs : Compress console log and json output]
2025-05-30 02:00:56.027807 | localhost | skipping: Conditional result was False
2025-05-30 02:00:56.032848 | localhost | skipping: Conditional result was False
2025-05-30 02:00:56.044740 |
2025-05-30 02:00:56.044949 | LOOP [upload-logs : Upload compressed console log and json output]
2025-05-30 02:00:56.099059 | localhost | skipping: Conditional result was False
2025-05-30 02:00:56.099636 |
2025-05-30 02:00:56.103150 | localhost | skipping: Conditional result was False
2025-05-30 02:00:56.118975 |
2025-05-30 02:00:56.119297 | LOOP [upload-logs : Upload console log and json output]
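The two "Clean the cloud environment" tasks above tear resources down in dependency order: servers and keypairs first, then ports and volumes, then router interfaces, subnets and networks, and finally security groups, floating IPs and routers. A rough openstacksdk sketch of that ordering follows; it is illustrative only, not the testbed's actual cleanup script, and the cloud name and name-prefix filters are assumptions taken from the resource names in the log.

import openstack

# Illustrative teardown in the order shown by the cleanup plays above.
# Assumptions: cloud "testbed" exists in clouds.yaml and all testbed
# resources carry a "testbed" name prefix.
def clean_cloud(cloud_name="testbed", prefix="testbed"):
    conn = openstack.connect(cloud=cloud_name)

    # clean up servers, then wait for them to be gone
    servers = [s for s in conn.compute.servers() if s.name.startswith(prefix)]
    for server in servers:
        conn.compute.delete_server(server)
    for server in servers:
        conn.compute.wait_for_delete(server)

    # clean up keypairs
    for keypair in conn.compute.keypairs():
        if keypair.name.startswith(prefix):
            conn.compute.delete_keypair(keypair)

    # clean up leftover ports and volumes
    for network in conn.network.networks():
        if network.name.startswith(f"net-{prefix}"):
            for port in conn.network.ports(network_id=network.id):
                conn.network.delete_port(port)
    for volume in conn.block_storage.volumes():
        if volume.name.startswith(f"{prefix}-volume"):
            conn.block_storage.delete_volume(volume)

    # disconnect routers from their subnets, then remove subnets and networks
    routers = [r for r in conn.network.routers() if r.name.startswith(prefix)]
    subnets = [s for s in conn.network.subnets() if s.name.startswith(f"subnet-{prefix}")]
    for router in routers:
        for subnet in subnets:
            conn.network.remove_interface_from_router(router, subnet_id=subnet.id)
    for subnet in subnets:
        conn.network.delete_subnet(subnet)
    for network in conn.network.networks():
        if network.name.startswith(f"net-{prefix}"):
            conn.network.delete_network(network)

    # security groups, floating IPs, and finally the routers themselves
    for group in conn.network.security_groups():
        if group.name.startswith(prefix):
            conn.network.delete_security_group(group)
    for fip in conn.network.ips():
        if fip.port_id is None:  # sketch simplification: removes every unattached floating IP
            conn.network.delete_ip(fip)
    for router in routers:
        conn.network.delete_router(router)

Deleting the servers and waiting for them to disappear before touching ports, volumes, and networks mirrors the "wait for servers to be gone" step in the log and avoids "in use" conflicts; the routers are removed last, after their subnet interfaces have been disconnected.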